

The PowerShell Best Practices and Style Guide

Table Of Contents

Creative Commons License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Please attribute it to Don Jones, Matt Penny, Carlos Perez, Joel Bennett, and the PowerShell Community.

You are free to:

Share — copy and redistribute the material in any medium or format

Adapt — remix, transform, and build upon the material

The authors encourage you to redistribute this content as widely as possible, but require that you give credit to the primary authors below, and that you notify us on GitHub of any improvements you make.

What are Best Practices

PowerShell Best Practices are what you should usually do as a starting point. They are ways of writing, thinking, and designing which make it harder to get into trouble. The point of a Best Practice is to help the reader to fall into the pit of success:

The Pit of Success: in stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platform and frameworks. To the extent that we make it easy to get into trouble we fail.

-- Rico Mariani, MS Research MindSwap Oct 2003.

Like English spelling and grammar rules, PowerShell programming best practices and style rules nearly always have exceptions, but we are documenting a baseline for code structure, command design, programming, formatting, and even style which will help you to avoid common problems, and help you write more reusable, readable code -- because reusable code doesn't have to be rewritten, and readable code can be maintained.

Having said that, remember: the points in the Best Practices documents and the Style Guide are referred to as practices and guidelines, not rules. If you're having trouble getting something done because you're trying to avoid breaking a style or best practice rule, you've misunderstood the point: this document is pragmatic, rather than dogmatic. We'll leave dogmatism to teams and projects that require you to meet their specific guidelines.

Table of Contents

The guidelines are divided into these sections:

  • Style Guide
    • Code Layout and Formatting
    • Function Structure
    • Documentation and Commenting
    • Readability
    • Naming Conventions
  • Best Practices
    • Naming Conventions
    • Building Reusable Tools
    • Output and Formatting
    • Error Handling
    • Performance
    • Language, Interop and .Net
    • Metadata, Versioning, and Packaging

Current State:

Remember what we mean by Best Practices.

The PowerShell Best Practices are always evolving, and continue to be edited and updated as the language and tools (and our community understanding of them) evolve. We encourage you to check back for new editions at least twice a year, by visiting https://github.com/PoshCode/PowerShellPracticeAndStyle.

The PowerShell Style Guide in particular is in PREVIEW, and we are still actively working out our disagreements about the rules in the guide through the GitHub issues system.

Contributing

Please use the issues system or GitHub pull requests to make corrections, contributions, and other changes to the text - we welcome your contributions!

For more information, see CONTRIBUTING.

Credits

The Community Book of PowerShell Practices was originally compiled and edited by Don Jones and Matt Penny with input from the Windows PowerShell community on PowerShell.org.

Portions copyright (c) Don Jones, Matt Penny, 2014-2015

The PowerShell Style Guide was originally created by Carlos Perez, for his students, and all the good parts were written by him.

Portions copyright (c) Carlos Perez, 2015

Any mistakes in either of these documents are there because Joel Bennett got involved. Please submit issues and help us correct them.

Portions copyright (c) Joel Bennett, 2015


Issues

Add Section on Code Attribution.

I have found it difficult to find examples of how to properly attribute others' code when they have given permission to reuse it. I think it would be nice to have a section highlighting best practices for this. I don't have a developer background, and I think others in a similar situation would find it beneficial as well.

Profile scripting best practices

I'm not sure where this belongs or if it even belongs in this repo but... Most of us probably have pretty complex profile scripts. My PSReadline custom configuration alone is over 200 lines. So I've taken to moving those "module-specific" configurations out into a separate file that sits by my profile.ps1 file. It is called "PSReadline_config.ps1" - I've seen this approach with Bash (.bash_aliases). I was thinking something like <ModuleOrDescription>_config.ps1. Then in your profile script you just dot source it in like so:

. $PSScriptRoot\PSReadline_config.ps1

This has made it easier for me to share my PSReadline config with others.

BTW, there are probably some best practices around "which" profile scripts to use. I'm old school and use the current-user, all-hosts script, and for PSReadline I check against the host name. I should probably use the console's profile (Microsoft.PowerShell_profile.ps1) and ditch the if ($host.Name -eq 'ConsoleHost') check.

Add section on variable style

It may be in general guidelines, but I could not find a general guideline on defining variables and I think there should be a section for this.

Is it best to use camelCase?
Is it best to UpperCase each word?

What is the general community view?

Avoid New-Object Hashtable

When you create a hashtable with New-Object instead of the literal initializer @{ } the resulting hashtable is case sensitive
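A quick illustration of the difference (assuming default PowerShell behavior):

# New-Object produces a case-sensitive hashtable...
$sensitive = New-Object System.Collections.Hashtable
$sensitive['Key'] = 1
$sensitive['key']      # $null -- 'key' and 'Key' are different keys here

# ...whereas the literal initializer is case-insensitive:
$insensitive = @{ Key = 1 }
$insensitive['key']    # 1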

Consider having file-per-section or rule

Having multiple files would make merging and change tracking easier, and if people start making "customized" versions, it would make it easier for them to isolate the changes that nobody else wants.

ValidatePattern should be avoided in some (many?) cases

avoid validating parameters in the body of the script when possible and use
parameter validation attributes instead.

The ValidatePattern attribute specifies a regular expression that is compared
to the parameter or variable value. Windows PowerShell generates an error if
the value does not match the regular expression pattern.

From my experience ValidatePattern is a feature to be avoided when possible,
in contrast to the guidelines. It's regular expressions and end users are not
necessarily experts. But PowerShell will show them just regular expressions in
error messages, nothing else. Even simple expressions like ^\w+$ are cryptic
for some people. Compare with "should contain alphanumeric and underscore
characters."

P.S. I agree with other parameter attributes, they are good. ValidateScript
also has some issues (see here) but unlike with ValidatePattern they can be
resolved with using throw <friendly message>.
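For illustration, the ValidateScript-plus-throw pattern described above might look like this; the function name, parameter, and message are made up:

function Set-Widget {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [ValidateScript({
            if ($_ -notmatch '^\w+$') {
                # throw controls the message the user sees, instead of a raw regex error
                throw "Name should contain only alphanumeric and underscore characters."
            }
            $true
        })]
        [string]$Name
    )
    $Name
}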

Should Parameters Always Have Default Values?

I've already written about this, but it's probably worth having this conversation here.

Function parameters are always automatically defined in the local scope and initialized by PowerShell to their default value (the equivalent of null -- which comes out as an empty string or a zero for numerical values, or a default struct, etc).

This happens even when they are part of an unused parameter set.

There is, in my opinion, nothing to be gained in setting them yourself -- unless you actually need them to default to something other than $null.

If anything, this should be warned against for performance reasons, and because people frequently provide default values on mandatory parameters (which can be misleading and even invalid defaults, since they're not used).

As with C# and most other programming languages, initializing a variable to its default value is just extra work that accomplishes nothing. It might even make your script (infinitesimally) slower, whilst changing nothing.
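A small sketch of the misleading case mentioned above (the function is hypothetical); the default is never used, because PowerShell prompts for mandatory parameters regardless:

function Test-Widget {
    [CmdletBinding()]
    param(
        # Misleading: callers are always prompted for -Name, so 'DefaultName' never applies.
        [Parameter(Mandatory = $true)]
        [string]$Name = 'DefaultName'
    )
    $Name
}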

Best Practices and Style Guide: One Project or Two

We are taking over the Best Practices book, in addition to this style guide, so the question is: shall I

  1. Make folders in this project for the Style Guide and Best Practices?
  2. Move all the Best Practices stuff to a new project?

Best Practices for Module Development and Building

Hey, anyone interested in trading success stories about how you "build" modules?

I just tried something new this week on the SecretServer module, thanks to @bushe and @RamblingCookieMonster ... where the functions in the module are organized in "Public" and "Private" folders, and the psm1 dot-sources them, but the build script combines them, copying all the content into the psm1 -- so when it's shipped, the module is just the .psd1 and the .psm1
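A minimal sketch of the kind of dot-sourcing .psm1 described here (folder names assumed):

# ModuleName.psm1 -- dot-source everything during development;
# a build step can later inline these files into the psm1 for shipping.
$public  = @(Get-ChildItem -Path "$PSScriptRoot\Public\*.ps1"  -ErrorAction SilentlyContinue)
$private = @(Get-ChildItem -Path "$PSScriptRoot\Private\*.ps1" -ErrorAction SilentlyContinue)

foreach ($file in $public + $private) {
    . $file.FullName
}

# Only the Public functions are exported
Export-ModuleMember -Function $public.BaseName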

The result is somewhat easier for code navigation and debugging during dev (at least in Visual Studio, Code, and Sublime) and faster loading of the 'built' module.

I like it so much, I'm wondering if it's worth teaching others to do the same...

The build script for that is something that's gone through many revisions on other projects, and I'm starting to wonder if there's a way we can ever stop all the projects on github from having their own unique build/test systems.

See Also: ModuleBuilder, PSake, PSDeploy, CodeFlow, Pester, etc...

Should the Style Guide recommend against non-advanced functions?

https://github.com/PoshCode/PowerShellPracticeAndStyle/blob/master/Style%20Guide/English.md#functions

I think the Style Guide should just say:

Prefer CmdletBinding because functions without CmdletBinding don't get common parameters and therefore do not behave the way users expect them to. For instance, when you call them with common parameters (like -Verbose or -WhatIf or -ErrorVariable), those parameters don't work as expected either.
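For illustration, a minimal advanced function (the name is hypothetical); the CmdletBinding attribute is what wires up the common parameters:

function Get-Widget {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$Name
    )
    process {
        # -Verbose and -ErrorVariable now behave the way callers expect
        Write-Verbose "Looking up widget '$Name'"
        # ... do the work ...
    }
}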

I could add a few other reasons, mostly around the inconsistency and the fact that you frequently have to upgrade functions to CmdletBinding, which changes their syntax, but my point is that I think the style guide and best practices should strongly recommend writing advanced functions, rather than providing style suggestions for them.

Anyone want to defend the use of non-advanced functions?

Would tangential topics (e.g. re: research and write-ups of projects) make sense here, as practices?

Hi!

TLDR: Would it make sense to include topics tangential to code, but important to code writers here?

Examples (whether you agree or not):

  • Conduct a mini "literature review" before undertaking and releasing a project. Are there similar projects out there? Is there a reason to create your own, rather than contributing to these? Did you borrow ideas or code from these? Consider listing these at a minimum in any blog posts, readmes, and help content, and ideally, explain scenarios where your solution is a better fit.
  • Reduce dependence on the current state and time. Include a synopsis of your operating environment such as PowerShell version, operating system (if applicable), module versions, etc. When linking to content that may change, attempt to use a static link: for example, when linking to content in GitHub or a similar site, specify a specific commit, rather than the generic "latest" link.

etc.

Granted, these aren't PowerShell specific, so maybe this isn't the best spot? Thoughts?

Cheers!

(re)move detailed recommendations about validation attributes

https://github.com/PoshCode/PowerShellPracticeAndStyle/blob/master/Style%20Guide/English.md#when-using-advanced-functions-or-scripts-with-cmdletbinding-attribute-avoid-validating-parameters-in-the-body-of-the-script-when-possible-and-use---parameter-validation-attributes-instead

I think the style guide should just have the text of this headline and a link to Best Practices for using validation and TypeAdapter attributes on parameters.

When writing functions or scripts, avoid validating parameter values in the body of the script, and use parameter validation attributes instead when possible (See Best Practices for Parameters).

Then we should ensure that in addition to linking to the help file on these parameter attributes, we provide more information than the help files do:

  • Recommend (when to) use AllowNull, AllowEmptyString, AllowEmptyCollection and why.
  • Recommend avoiding ValidatePattern
  • Recommend always throw from ValidateScript to control the error message
  • Any others?

Anyone have an objection to simplifying the style guide like that, or ideas about additional points for best practices?
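As a concrete illustration of the first bullet above, a sketch (the function is made up): a mandatory parameter rejects an empty string unless you explicitly allow it.

function Set-Description {
    [CmdletBinding()]
    param(
        # Mandatory parameters reject '' by default; AllowEmptyString opts back in,
        # which matters when "clear the description" is a valid request.
        [Parameter(Mandatory)]
        [AllowEmptyString()]
        [string]$Description
    )
    $Description
}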

Do you (all) still care about non-style Best Practices?

I've been finding it hard to draw the line on style and "best practices," because obviously good style is a best practice 😀

I'm thinking about using this as a jumping off point to finally write what was originally going to be a series of blog posts that were going to look roughly like this:
https://github.com/PoshCode/PSStyleGuide/blob/master/ExampleExample.md

The idea was to take a single "Best Practice" and write a series of "DO" or "DO NOT" posts for it, using the pattern:

Best Practice:

Rule: DO (or DO NOT) ...

Except when ...

Instead ...

Because ...

Example:

Notes:

And then index them all and cross-tabulate and some junk, maybe make an ebook out of it. But then came the PowerShell Best Practices book from Ed Wilson (which is, frankly, the worst named PowerShell book I know of), and the "Community Practices" ebook from PowerShell.org, and I've still never gotten around to writing them.

So anyway, the question is: is anyone interested in fleshing out the Best Practices stuff that's already in the document here, or the thoughts that are in TODO.md ... or should we trim some of that stuff out and keep this firmly a style guide (a la PEP8)?

Capitalization of language statements in code: for example, if or If?

Thanks for these style guidelines.

Do you have guidelines here for capitalization of language statements in code?

For example, what does the community consider best practice: if or If?

I've read, among other web pages, several Microsoft guidelines which strongly favor Pascal case in various situations, but I've not found anything that specifically refers to language statements.

The Microsoft PowerShell documentation for the If statement (after initially characterizing it as a "language command") refers to the If statement (with an initial capital letter) in prose, but uses if (all lowercase) in code.

Request for comments

I found these style guidelines only after uploading my first non-trivial script at:

https://github.com/unsoup/validator/blob/gh-pages/tools/Get-vnu-Schemas.ps1

Feel free to rip into that script by creating issues, although I personally baulk at four spaces per indent, and I've deliberately chosen to use backticks in a couple of places.

I'd particularly welcome feedback on a best-practice replacement for | Out-Null.

Right now, function (or script) parameter names in that script are Pascal case (uppercase first letter), whereas variables (that are not parameters to functions or to the script) are camel case. I think I'm going to ditch that distinction and just use Pascal case for all of them.

Nit

Typo in the style guide intro:

just recomendations

needs another "m".

RegEx part

It really would be nice to have a RegEx part which helps with some traps like:

  • clarify differences between -match, -imatch, -cmatch
    • differences while using modifiers like (?-i)
  • best practice for case sensitive patten
    • when to use -cmatch
    • or -match with modifier like (?-i)
    • -match in combination with character exclusions like [A-Z^a-z0-9]{1,3}

These are just some examples which took me some time to understand when I started using regex patterns with PowerShell.

Finally, here is an example from when I started working with it:
(screenshot of a PowerShell admin session omitted)
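For reference, a short sketch of the case-sensitivity behaviors in question:

'PowerShell' -match  'powershell'        # True  -- -match is case-insensitive by default
'PowerShell' -imatch 'powershell'        # True  -- -imatch is the explicit case-insensitive form
'PowerShell' -cmatch 'powershell'        # False -- -cmatch is case-sensitive
'PowerShell' -match  '(?-i)powershell'   # False -- the inline modifier turns case sensitivity on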

Blank Lines

What is the reasoning behind the statement "End each file with a single blank line?" I haven't seen this as a best practice before. If there is a solid reason, then that reason should be stated.

Suggestions for improvements/discussions

Hi.

Note: I really looked into the exisiting tickets before posting, but I might have overlooked something and included something you already discussed

I am really exited about this repo and already decided to change some stuff I do when coding.
While thinking about how I code stuff I thought of these situations and am looking forward to your opinion on them:

  • 1 line code blocks
    Although an article already suggests having { on a dedicated line, I do get annoyed with blocks like:
if ($foo -match "^\d+$")
{
    $bar = $false
}

I write it like:

if ($foo -match "^\d+$")
    { $bar = $false }
  • Documentation Comments relative to function
    The section DOC-01 Write comment-based help already illustrates this, but I think a section on where to put the comment would be nice.
<#
    .DESCRIPTION
        Lorem
#>
function do-stuff
{
    param()
    process
    {
        "bar"
    }
}

vs

function do-stuff
{
    <#
        .DESCRIPTION
            Lorem
    #>
    param()
    process
    {
        "bar"
    }
}

vs

function do-stuff
{
    param()
    process
    {
        "bar"
    }
    <#
        .DESCRIPTION
            Lorem
    #>
}
  • How to make param mandatory
    I don't like that Get-Help throw returns

...
USING THROW TO CREATE A MANDATORY PARAMETER

You can use the Throw keyword to make a function parameter mandatory.

This is an alternative to using the Mandatory parameter of the Parameter
keyword. When you use the Mandatory parameter, the system prompts the user
for the required parameter value. When you use the Throw keyword, the
command stops and displays the error record.

For example, the Throw keyword in the parameter subexpression makes the
Path parameter a required parameter in the function.

In this case, the Throw keyword throws a message string, but it is the
presence of the Throw keyword that generates the terminating error if
the Path parameter is not specified. The expression that follows Throw
is optional.

 function Get-XMLFiles  
 {  
     param ($path = $(throw "The Path parameter is required."))  
     dir -path $path\*.xml -recurse | sort lastwritetime | ft lastwritetime, attributes, name  -auto  
 }  

Maybe add a # BAD example for this under "When using advanced functions or scripts with CmdletBinding attribute avoid validating parameters in the body of the script when possible and use parameter validation attributes instead"?

  • -verbose / -debug
    Best Practice on what kind of information should be return in Write-Verbose and Write-Debug ?
  • [Alias()]
    Any best practices for [Alias()]?
    I don't like code where an alias is defined just to shorten a parameter name
# GOOD
param(
    [Alias("cn")]
    $computerName
)

# BAD
param (
    [Alias("comp")]   # as this would be usable without this line anyway
    $computerName
)
  • Unit Tests
    Any recommendations on this?
    Wouldn't a .\Tests\ folder make sense in a module?
  • Module Folder Structure
    I see no point in making a separate .ps1 file for each function in a module with 2-3 functions. But where is the limit?
    Any recommendations in the structure?
   .\
      ModuleName.psd1
      ModuleName.psm1
      Functions\
         Foo-Bar.ps1
         Bar-Foo.ps1
      Dlls\
      [...]
  • How to build a proper .psm1 file
    Any best practice on how to load .ps1 files in a module?

I currently use

Get-ChildItem -Path $PSScriptRoot -recurse | Unblock-File
Get-ChildItem -Path $PSScriptRoot\*.ps1 -recurse | Foreach-Object{ . $_.FullName }

Line Length 80 Characters for Command Prompt?

I know the formatting guide says to stick to 115 characters b/c the PowerShell host defaults to 120, with some padding in there, etc.

Should we be concerned about the built-in command prompt, which is 80 characters wide by default?

Maximum line length indicator

Does anyone know of an add-on for the ISE that will display an indicator of some sort on a specified column? Without that, I don't think it's a good idea to include a maximum line length item in the style guide. While other editors such as Sublime Text offer this, the ISE is pretty much the de facto script editor for the majority of the PowerShell community.

Sure, you can look for the tiny "Line #, Col #" text in the status bar, but it's annoying to have to constantly shift your eyes down there to know whether you've crossed the boundary.

Inconsistencies or not mentioned special cases

You should use a single space around parameter names and operators, including
comparison operators and math and assignment operators, even when the spaces
are not necessary for PowerShell to correctly parse the code.

Used examples, for instance this

[Parameter(Mandatory=$true)]

does not have spaces around "=". Personally, I also follow the rule "spaces
around operators" except: a) attribute values; b) parameter default values.
Perhaps the guidelines should mention these cases.

Nested expressions $( ... ) and script blocks { ... } should have a single
space inside them to make code stand out and be more readable.

This used example

[ValidateScript({$_ -ge (get-date)})]

does not use spaces inside the script block. Perhaps it is a special case when
this is appropriate. Then this should be mentioned.
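To illustrate the two exceptions described above, a short sketch (the function and parameter are arbitrary):

# Spaces around math and assignment operators in ordinary code:
$count = 1
$total = $count + 1

# But no spaces around '=' inside attribute arguments or parameter default values:
function Invoke-Demo {
    param(
        [Parameter(Position=0)]
        [int]$RetryCount=3
    )
    "$RetryCount retries, $total total"
}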

Type accelerators casing

I love this project. I obsess over code formatting (not in a good way). Anyway, one thing I can't find mentioned is how to handle type accelerators with regards to casing and fully-qualified or not. For instance:

[string] or [String] or [System.String]?

What I'm doing now is that for parameter declaration I use lowercase:
[string]$InputString

And:
$InputString -is [string]

but I title case it when I am calling a method.

I have no idea why I chose that style and I'm not necessarily consistent with it across projects.

And, for those types that don't have an accelerator listed (System.Object, System.Math), I typically use either the title case or the fully qualified. Not sure why I treat them differently since [object] and [math] are certainly valid, I just do and blame MS for the lack of a listed accelerator for it (instead of blaming my OCD). sigh

Any thoughts/opinions on this one?

Write-Output vs. Return vs. $Output

There is currently a recommendation in Function and Structure to not use "return" but it says to "just put the variable on a line by itself" ...

I wonder if that's the majority opinion?

Assuming this is the last line of my function, what's better:

# Option 1: return
$output = $temp + (Get-Thing $temp)
return $output

# Option 2: the variable on a line by itself
$output = $temp + (Get-Thing $temp)
$output

# Option 3: explicit Write-Output
$output = $temp + (Get-Thing $temp)
Write-Output $output

I won't go into piping output to Write-Output ;-)

DISCUSSION GUIDELINES

@darkoperator this is particularly for you and I as named and blamed stakeholders ;-)

Can we all agree on this (or something like it) as a statement of purpose to guide our discussions?

I realized this week that I've written a few things in the "must" and "require" and "forbid" style, and ended up in endless arguments -- I apologize to everyone for that, but I want to present something up front (in CONTRIBUTING.md) to help us all get the right tone (and to give people something to point at when they have to tell me I'm doing it wrong).

Purpose

PowerShell Best Practices are what you should usually do as a starting point. They are ways of writing, thinking, and designing which make it harder to get into trouble. The point of a Best Practice is to help the reader to fall into the pit of success:

The Pit of Success: in stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platform and frameworks. To the extent that we make it easy to get into trouble we fail.

-- Rico Mariani, MS Research MindSwap Oct 2003.

Like English spelling and grammar rules, PowerShell programming best practices and style rules nearly always have exceptions, but we hope to document a baseline for code structure, command design, programming, formatting, and style which will help you to avoid common problems, and even help you write more reusable, readable code, because reusable code doesn't have to be rewritten, and readable code can be maintained.

Having said all of that, if you're having trouble getting something done because you're trying to avoid breaking a style or best practice rule, you've misunderstood the point: this document is pragmatic, rather than dogmatic.

Tone

One of the goals as we rewrite these documents is to make it easy to agree with them. As a result, we will avoid absolute language, and will encourage writing the proactive and positive guidelines rather than negative ones. Words like "always" and "never", "must" and "forbid" should be avoided when possible in favor of words like "usually", "normally", "should" and "avoid".

The points in the Best Practices documents and the Style Guide are referred to as practices and guidelines, not rules. That said, we should not shy away from making recommendations whenever they will make it easier to be successful, harder to fail, or easier for someone else to pick up the code later, but we should explain the rationale: and should particularly note when the reason is readability and maintainability, performance, or security, rather than just being a smoother path.

Knowing that there are cases where other patterns may be acceptable or even necessary is not a reason to exclude a practice that is correct most of the time. Rather, each practice or guideline should include the rationale, an example case, and when appropriate, counter examples and notable exceptions.

Organization

The guidelines will be divided into these sections:

  • Style Guide
    • Code Layout and Formatting
    • Function Structure
    • Documentation and Commenting
    • Readability
    • Naming Conventions
  • Best Practices
    • Naming Conventions
    • Building Reusable Tools
    • Output and Formatting
    • Error Handling
    • Performance
    • Language, Interop and .Net
    • Metadata, Versioning, and Packaging

Markdown documents on GitHub support linking within a document, but only to headers, so when editing, in addition to keeping practices and guidelines in the documents where they make sense, please use headlines for each guideline, and lower level headlines for rationale, examples, counter examples, and exceptions.

If you can't figure out where something should go, open an issue and we'll discuss it. If there are enough points which don't fit well in the current sections, we may open a new section.

Reminder:

The PowerShell Best Practices and Style Guide is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

You are free to:

Share — copy and redistribute the material in any medium or format

Adapt — remix, transform, and build upon the material

The authors encourage you to redistribute this content as widely as possible, but require that you give credit to the primary authors below, and that you notify us on GitHub of any improvements you make.

Where to put braces

Option One: Same Line

function Get-Noun {
    end {
        if($Wide) {
            Get-Command | Sort-Object Noun -Unique | Format-Wide Noun
        } else {
            Get-Command | Sort-Object Noun -Unique | Select-Object -Expand Noun
        }
    }
}

Option Two: New Line

function Get-Noun
{
    end
    {
        if($Wide)
        {
            Get-Command | Sort-Object Noun -Unique | Format-Wide Noun
        }
        else
        {
            Get-Command | Sort-Object Noun -Unique | Select-Object -Expand Noun
        }
    }
}

I deliberately left out the param block just to make sure we don't get distracted and argue about it.

There is no One True Brace Style

There are many brace and indent styles in programming, but in the PowerShell community, there are essentially three:

  • BSD/Allman style
  • K&R/OTBS
  • Stroustrup

I've briefly given an example and explained the rationale for each below.
Feel free to comment, but please vote for your favorite by using the 👍.
And of course, if there's one you can't stand, feel free to give it a 👎.

Splatting vs. backticks

The preferred way to wrap long lines is to use splatting (see
About_Splatting) and PowerShell's implied line continuation inside
parentheses, brackets, and braces -- these should always be used in
preference to the backtick for line continuation when applicable.

I slightly disagree with splatting, especially with the word "preferred". It is
a possible way, not preferred.

Splatting avoids backticks, this is probably a good thing. But it introduces
some drawbacks, too. It would be nice if the guideline mentions them, so that
people understand the consequences and make their choice being well informed.

The main drawback is the absence of tab completion for parameters, and even for
some parameter values, when using the splatting approach. In contrast,
TabExpansion works fine across multiple lines with backticks, and TabExpansion
is a real time saver.

Another drawback is the need to introduce a variable, because unfortunately
splatting requires one. It is not a big deal, but it is just not natural.

Yet another minor thing is understanding the code. When a statement starts with
a command name, it is clear what it does, even if it has a lengthy continuation
with backticks. In contrast, if a statement starts with a lengthy hashtable
assigned to a variable, then without looking at the command after it, it is
less clear why it is needed.
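For readers who have not seen the two styles side by side, a quick sketch (the parameters are arbitrary):

# Backtick line continuation:
Get-ChildItem -Path $home `
              -Filter '*.ps1' `
              -Recurse

# Splatting: collect the parameters in a hashtable and pass it with @
$getChildItemParams = @{
    Path    = $home
    Filter  = '*.ps1'
    Recurse = $true
}
Get-ChildItem @getChildItemParams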

ForEach vs ForEach-Object

In your performance document you mention that:

$content = Get-Content file.txt

ForEach ($line in $content) {
  Do-Something -input $line
}

is "However, this approach could offer extremely poor performance. If file.txt was a few hundred kilobytes, no problem; if it was several hundred megabytes, potential problem. Get-Content is forced to read the entire file into memory at once, storing it in memory (in the $content variable)."

Now, this is inaccurate, because the other way you mention ( | ForEach-Object) is actually exponentially slower due to serialization of the data in the pipeline (this adds both CPU cycles and even MORE memory usage overall).

Here is some data, using your own example, that proves my point. This is a Get-Content on a ~50 MB file (a Java installer):

PS D:\Users\Kelcey.pixelrebirth\Downloads> measure-command {
>>> $content = get-content $pwd\jre-8u91-windows-x64.exe
>>>
>>> ForEach ($line in $content) {
>>>   $line.length
>>> }
>>> }


Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 11
Milliseconds      : 466
Ticks             : 114668385
TotalDays         : 0.000132718038194444
TotalHours        : 0.00318523291666667
TotalMinutes      : 0.191113975
TotalSeconds      : 11.4668385
TotalMilliseconds : 11466.8385



PS D:\Users\Kelcey.pixelrebirth\Downloads> measure-command {
>>> $content = get-content $pwd\jre-8u91-windows-x64.exe |
>>> ForEach-Object -Process {
>>>   $_.length
>>> }
>>> }


Days              : 0
Hours             : 0
Minutes           : 1
Seconds           : 3
Milliseconds      : 36
Ticks             : 630365204
TotalDays         : 0.000729589356481481
TotalHours        : 0.0175101445555556
TotalMinutes      : 1.05060867333333
TotalSeconds      : 63.0365204
TotalMilliseconds : 63036.5204

Incidentally, the StreamReader was ~10 seconds slower than the foreach ($line in ...) example:

PS D:\Users\Kelcey.pixelrebirth\Downloads> measure-command {
>>> $sr = New-Object -Type System.IO.StreamReader -Arg $pwd\jre-8u91-windows-x64.exe
>>>
>>> while ($sr.Peek() -ge 0) {
>>>    $line = $sr.ReadLine()
>>>    $line.length
>>> }
>>> }


Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 21
Milliseconds      : 490
Ticks             : 214900262
TotalDays         : 0.000248727155092593
TotalHours        : 0.00596945172222222
TotalMinutes      : 0.358167103333333
TotalSeconds      : 21.4900262
TotalMilliseconds : 21490.0262

As you wanted to start a discussion before rewriting a large portion of the guide, I am starting the discussion. I'd love to update the document to explain this phenomenon and share any sites I can re-find on the topic.

That being said, a file large enough to fill up your memory would likely HAVE to be processed with the pipeline technique, or you will lag PowerShell and the system ([gc]::Collect() in the foreach loop helps, but the machine still starts to hang at upwards of 98% memory use). But the piped way is never going to be FASTER. Maybe this memory issue is what you meant by performance; to me that has a very different meaning. This would need to be clarified in the write-up as well.

I truly want to contribute to this; more from me will come in the next few weeks as I write the PowerShell style guide for my shiny new employment opportunity. (I will be using yours as a baseline and adding several things, hoping some of them can make it into an accepted PR.)

Brace/Capitalization Guidelines?

Since PEP-8 is specifically called out (and seems to be heavily influencing this document), do we want to develop brace/capitalization guidelines? PowerShell doesn't care, but it's often a big pet-peeve of developers, and I've seen a ton of different styles out there in the wild. Personally, I'm an OTB fan and, for PowerShell at least, I CamelCase everything since that's the Cmdlet naming convention.

Is there actually a best way to handle errors?

https://github.com/PoshCode/PowerShellPracticeAndStyle/blob/master/Best%20Practices/err-05-avoid-testing-for-a-null-variable-as-an-error-condition.md

First of all, let's not have a misleading discussion. The original guideline uses a bad example. Get-ADUser throws an exception when you use the -Identity parameter and it can't find anything. There's obviously no value in writing a warning if you didn't suppress the error, and it's equally silly to let an exception spew and then still test for null -- you need to either suppress the error or use it instead of testing for null.

Let's try a different example. I have a module which requires the user to configure multiple connection strings and a license before it can continue. Imagine that it was this simple to verify that you had, in fact, configured it:

$Config = Get-Content $ConfigurationFile

This will cause an error and an empty variable if the ConfigurationFile doesn't exist, but does not throw (it's not terminating) by default. There are a few ways I could handle that in PowerShell.

For what it's worth: the pythonic way is always to just charge ahead, never test, and just deal with the exceptions (or let them output, if they're clear enough on their own). The old C/C++ way is to always check the hResult, there are no exceptions. The Java and C# way usually involve handling exceptions, but not quite as aggressively as Python, you would rarely force an exception to happen if you could avoid it by a quick test.

So which of these should be recommended? Incidentally, for the sake of this discussion, please imagine my throw statement to be your preferred way of exiting a function with extreme prejudice.

Avoid the error:
if(Test-Path $ConfigurationFile) {
    $Config = Get-Content $ConfigurationFile
} else {
    # We could write a warning and return, but for consistency:
    throw "You must configure MyModule first, please call Initialize-MyModule"
}
<# Do the work #>

Of course, if it's best practice to avoid errors when possible, you still have to have a best practice for dealing with them, because it's not always as easy as Test-Path to avoid them.

Suppress the error and check the output:
if($Config = Get-Content $ConfigurationFile -ErrorAction SilentlyContinue) {
    <# Do the work #>
}
# We could have just done ... nothing, but for consistency:
throw "You have to configure MyModule first, please call Initialize-MyModule"

Or I could write that backwards:

if(!($Config = Get-Content $ConfigurationFile -ErrorAction SilentlyContinue)) {
    throw "You have to configure MyModule first, please call Initialize-MyModule"
}
<# Do the work #>

Force an exception and catch it:
try {
    $Config = Get-Content $ConfigurationFile -ErrorAction Stop
} catch { 
    # Normally you'd be using some information from the exception, but for consistency
    throw "You have to configure MyModule first, please call Initialize-MyModule"
}
<# Do the work #>

Deal with the error itself:
$Config = Get-Content $ConfigurationFile -ErrorAction SilentlyContinue -ErrorVariable NoConfig

if($NoConfig) {
    # Normally you'd be using some information from the error, but for consistency
    throw "You should configure MyModule first, please call Initialize-MyModule"
}
<# Do the work #>

Would organization of Modules be in scope?

Would it be worth including bits on organizing files and general structure for modules? For example:

  • Do functions get their own files? If so, how to name these files? If mixed, when to separate these into individual files? My take: regardless of lines of code, I prefer to separate every function into its own file
  • Is there a preferred / 'best practice' for organization? For example:
    • Repository Root
      • Tests
        • Integration
        • Unit
      • ModuleName
        • ModuleName.psd1
        • ModuleName.psm1
        • \Public - Public functions in here (or at root?)
        • \Private - Private functions in here
        • \lib - Optional folder for libraries
        • \bin - Optional folder for binaries

I could see this spiraling out of control, and most (or all) of it would be subjective, so might not be appropriate here.

Cheers!

A question of configuration

Does anyone else use (external) configuration files for their modules?
How do you wish PowerShell supported that?

I looked at a bunch of modules which need configuration (like PSReadLine, PSCX, AzureRM, PowerShellGet, PSScriptAnalyzer) and every one of them is using a different way to deal with it. Most of them require you to configure them with their own commands, so you can't configure them until after you import them (and configuring them in your Profile.ps1 would force you to import them). A few use environment variables, global variables, or XML or JSON configuration files.

It seems to me that we can do better: there's an opportunity here for an RFC to the PowerShell Core project to either add a setting, or an event, or a set of configuration commands.

The idea is to create a simple common convention for a way to customize modules on import (or run initialization code after import). The requirements are:

  • The initialization must always get run, regardless of how the module is imported.
  • Configuration must not inadvertently import the module.

Here are my two main ideas, along with an old suggestion I was reminded of earlier today... what do you all think? Is there a better way?

  1. A Module Initialization Script

Several modules (notably AzureRM.Profile) use a "startup" script which is run by the module (within the module scope?) when the module is being loaded. Others don't have startup scripts, but still rely on cmdlets for all their customization (notably PSReadLine).

Azure is using the IModuleAssemblyInitializer interface and subtly abusing it to run a local script file for binary modules. However, we could run it any number of ways, and we could establish a pattern for the file name like "$PSHome\modulename-profile.ps1" ...

In an ideal world, we would add a setting for the module manifest, and PowerShell would automatically run that script at import time (as though it were dot-sourced at the end of the module psm1?).

  2. Module configuration

I wrote the Configuration Module to allow modules to store their settings in layered (i.e. default + machine + user) PSD1 data files that are stored in your AppData folders. This allows configuration files to be manipulated using commands from the Configuration module without worrying about importing the module you're configuring (potentially before you even have it installed).

I had to write some code to make serialization to and from psd1 work (we could use JSON, but I want more type safety and such). and I also made some assumptions about what people want/need (layering, file storage in AppData, etc).

In an ideal world, we would have these commands built-in to PowerShell, and the configuration would be automatically imported and exported on module load/unload, and would populate a magic variable $PSModuleConfig

  3. Module Load Event

This would not work "down level", but hypothetically, we could add a "ModuleImported" event to PowerShell 6 that would trigger (with a module name) so that someone could just write (in their main profile script) something like this:

Register-EngineEvent ModuleImported {
    switch($_) {
        "PSReadLine" { <# customize module #> }
    }
}

This would be interesting because it allows you do any sort of logic you want, and would therefore be backward compatible to old modules that have existing configuration or setting commands, even though it couldn't ever work with old versions of PowerShell.

The down side, of course, would be the lack of discoverability, and the fact that it wouldn't force authors to think about how they want configuration to work ;-)

Thoughts, comments, rejections?

Command Prefixes

Should we be embedding Command Prefixes in function names, or using the PSD1 to specify the default?

I'm going to start this debate by saying that I think putting prefixes into the code is wrong.

  1. You break the -Prefix feature on Import-Module (nobody wants two prefixes)
  2. You break discoverability, because nobody can Get-Command -Noun and find your commands (because your noun is "ADUser" instead of "User" and "ADGroup" instead of "Group").

The counter argument seems to be that "ADUser" and "ADGroup" are actually nouns, not just nouns with a module prefix, and I guess I would be OK with that -- but that should be the distinction you make, and the bar you set for hardcoding a prefix: the prefixed noun should be so obvious that it's discoverable, and that all of your users will intuitively know to search for it. E.g. "ADUser" might be OK, but "QADUser" isn't 😉
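For context, the -Prefix feature mentioned in point 1 works like this (the module name here is hypothetical):

# Importing with -Prefix injects a noun prefix at import time, so the module's
# Get-User becomes Get-ContosoUser for this session without hardcoding anything.
Import-Module UserTools -Prefix Contoso
Get-Command -Module UserTools    # lists the commands under their prefixed names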

Help section in Best Practices missing

With PS Conference EU so strong in mind, and the evangelizing of June Blender about writing help, I'm missing a section about writing help. How about I ask June to work together with me on this?

ERR-01 and ERR-02 implies that you know all commands

In ERR-01 and ERR-02 you're told to use -ErrorAction on all cmdlets and set ErrorActionPreference around everything else, where needed.

This leaves it to the scripter to figure out how every command emits its errors. I believe this is impossible.

Personally I put this in the first two lines of the begin block in every advanced function I create:

Set-StrictMode -Version "Latest"
Set-Variable -Name "ErrorActionPreference" -Scope "Script" -Value "Stop"

With these two lines you will fail fast, hard, and consistently (the user's ErrorActionPreference will not matter).

I write Python quite often and would like to quote parts of The Zen of Python:

Errors should never pass silently.
Unless explicitly silenced.

I believe any PowerShell script also should behave this way. Silent errors can and will lead to nothing but confusion and frustrating debug situations.

Capitalization guidelines

So, there was a useful post by @Jaykul on #17 that got lost amid some debates that were not relevant to that specific discussion. This just came up again for me while watching a live demonstration where I realized I really didn't like the capitalization guidelines they followed, so I thought I'd try to refocus the discussion on this since we never did come to a consensus.

Here is Jaykul's post on this (slightly modified to remove points of contention that derailed the other issue), copied so that you don't have to go look it up:

Since nobody else is going first, here are my thoughts. Any disagreement?

Keywords (try, catch, foreach, switch)
lowercase (rationale: no language other than VB uses mixed case keywords?)

Process Block Keywords (begin, process, end, dynamicparam)
lowercase (same reason as above)

Comment Help Keywords (.Synopsis, .Example, etc)
PascalCase (rationale: readability)

Package/Module Names
PascalCase

Class Names
PascalCase

Exception Names (these are just classes in PowerShell)
PascalCase

Global Variable Names
$PascalCase

Local Variable Names
$camelCase (see for example: $args and $this)

Function Names
PascalCase

Function/method arguments
PascalCase

Private Function Names (in modules)
PascalCase

Constants
$PascalCase

I have doubts about Comment Help Keywords, and about Constants.

Comment help keywords have frequently been written in ALLCAPS to make them stand out more (the same is true for the process block keywords) one need only search poshcode to see examples of this. Personally, I think that if the indentation rules are followed, all of these keywords would be indented one level less than everything else around them, so there is no reason to capitalize them to make them stand out.

Constants are basically global variables which should not be changed. Hypothetically we can enforce non-changing on them. Note that Microsoft has several such constants and follows NO CONVENTION AT ALL: $Error, $ExecutionContext, $PID, $PSVersionTable, $ShellId, $Host, $PSHOME, $true, $false

I would argue that true and false are special, and that other than that, these are all PascalCase except $PSHOME which obviously should be $PSHome 😉 -- Hypothetically I'd be willing to use some other naming convention (like $ALL_CAPS or $Under_Score) but they would forever clash with the constants Microsoft has created ;-)

I am 100% in agreement with all of this (now that I took the distracting parts out 😉).

The only place where I have any question here is around case sensitivity with acronyms, which was not called out. According to the Microsoft standard for PascalCase and camelCase, if an acronym is two letters then you don't change the case, but if it is three or more then you do. That results in command names that look like this:

Get-AzureVMDscExtension
Get-AzureVMDscExtensionStatus
Publish-AzureVMDscConfiguration
Remove-AzureVMDscExtension
Set-AzureVMDscExtension

My brain has a really hard time delineating between VM and Dsc in those command names. I always see VMD sc. Personally, I follow these same PascalCase/camelCase rules, but for acronyms I always put every character beyond the first as lowercase, to make the identifiers more readable. For example, I would have named all of the above commands like this:

Get-AzureVmDscExtension
Get-AzureVmDscExtensionStatus
Publish-AzureVmDscConfiguration
Remove-AzureVmDscExtension
Set-AzureVmDscExtension

If I was to place an exception on anything related to acronyms, it would be PS, because, well, PowerShell. Either that or I would modify Microsoft's rule, indicating that you only maintain case on an acronym if the acronym is 2 characters long, and when maintaining case any acronyms immediately after that one must use the same case. With that, the commands would instead look like this:

Get-AzureVMDSCExtension
Get-AzureVMDSCExtensionStatus
Publish-AzureVMDSCConfiguration
Remove-AzureVMDSCExtension
Set-AzureVMDSCExtension

There are examples of this in native PowerShell cmdlets as well, but for those examples, the acronym is at the beginning of the noun which seems easier to identify than when it is in the middle with mixed case acronyms side by side.

Anyway, we should explicitly decide how we want acronyms to be cased when using Pascal or Camel case. My vote would be to always make every character after the first in acronyms lowercase (like I show in the second example above), but I'll follow whatever the community decides, and I realize it would probably be best if we simply followed Microsoft's guidelines for consistency here (even if I don't like them).

So, to keep it simple, two questions:

  1. Are there any objections to the proposed casing for the various PowerShell elements identified above?
  2. Do you think we should follow Microsoft's guidelines for acronyms wrt Pascal and Camel case or is that some horrible Kool-Aid that we should throw away in favor of always lowercasing every character in every acronym beyond the first when applying Pascal or Camel case to these elements?

the WAST-02 Report bugs to Microsoft section is outdated

The section WAST-02 Report bugs to Microsoft instructs users to report bugs to Connect.Microsoft.com. However, the PowerShell page on Connect redirects users to the PowerShell UserVoice, based on this blog post.

And to complicate matters more, that blog post contains a recommendation to submit feedback for our open-source projects via GitHub Issues.

Please update the guide with the correct or at least recommended location to report issues.
And if more than one issue tracker is being actively used please explain when to use one or the other.

Avoid backtick splatting example could be improved

The text lists this example:

$params = @{Class=Win32_LogicalDisk;
      Filter='DriveType=3';
      ComputerName=SERVER2}

Note that the semicolons are unnecessary and it is missing quotes. This works:

$params = @{ Class = 'Win32_LogicalDisk'
             Filter = 'DriveType=3'
             ComputerName = 'SERVER2'}

Using single quotes around strings that don't require expansion/evaluation

I searched and didn't see a previous issue for this. Do you think we should recommend using single quotes around any string that doesn't require some type of evaluation?

I'd propose that if a string is just a string, it should be surrounded by single quotes:

$testVariable1 = 'This is a plain string.'
$testVariable2 = 'This is a string containing a $variable I do not want evaluated'
$testVariable3 = "This is a string containing a $variable I want evaluated"

as opposed to:

$testVariable1 = "This is a plain string."
$testVariable2 = 'This is a string containing a $variable I do not want evaluated'
$testVariable3 = "This is a string containing a $variable I want evaluated"

I think this improves readability and is more consistent. Also, while the difference is very minimal, single quoted strings generally perform faster than double quoted strings when there is no expansion/evaluation.

A bit more info:

Formatting these documents

I wrote the Contributing guidelines before we pulled in the Best Practices book, and I've been reviewing that book and feel like it's broken up way too much (especially considering many of the documents are nothing but a headline, less than a whole paragraph).

https://github.com/PoshCode/PowerShellPracticeAndStyle/blob/master/CONTRIBUTING.md

I'm thinking perhaps the best solution for the sake of consistency and readability is to break up the Style Guide by its top-level categories, and unify the Best Practices on its top-level categories, one file each:

  • Style Guide
    • Code Layout
    • Commenting
    • Naming Conventions
    • Function Structure
    • Security
    • Metadata (Supported Versions)
  • Best Practices
    • Readability
    • Documentation (Commenting)
    • Metadata (Supported Versions)
    • Error Handling
    • Output
    • Performance
    • Function Structure (aka "TOOL" - writing good, reusable functions)
    • Interop (aka "PURE" -- PowerShell vs. .Net vs. Native)
    • Waste

Any problems with that?

There's obviously some overlap right now, here's my thoughts on removing it:

  • The Best Practices Readability and Documentation sections should probably both be in the Style Guide instead.
  • The Style Guide section on PowerShell Supported Version and Security should probably both be in Best Practices instead.
  • The Best Practices "TOOL"s section should lean heavily on the Style Guide for details, but deserves to stay in Best Practices.

Is there guidance for using the 3 states of a switch parameter?

I'm tempted to create a function that does different things when the switch parameter is missing, true, and false.

Using the 3 states cuts down on the number of switch parameters, and sometimes makes the function easier to understand, but I don't know how widely this practice is accepted by the community.
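For context, the usual way to tell the three states apart inside the function is to check whether the switch was bound at all; a minimal sketch (the function and parameter names are illustrative):

function Get-Thing {
    [CmdletBinding()]
    param(
        [switch]$IncludeHidden
    )
    if (-not $PSBoundParameters.ContainsKey('IncludeHidden')) {
        # switch not supplied at all -- the third state
    }
    elseif ($IncludeHidden) {
        # -IncludeHidden or -IncludeHidden:$true
    }
    else {
        # -IncludeHidden:$false
    }
}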

Example

For an example of when a 2-state switch parameter is misleading, look at Get-ChildItem -File -Directory.

    #list all - ok
    Get-ChildItem

    #list directories - ok
    Get-ChildItem -Directory

    #list files - ok
    Get-ChildItem -File

    #list directories - ok
    Get-ChildItem -File:$false -Directory

    #list files - ok
    Get-ChildItem -File -Directory:$false

So far so good, but look at this.

    #list all - not intuitive
    Get-ChildItem -Directory:$false

    #list all - not intuitive
    Get-ChildItem -File:$false

    #list nothing! - very surprising
    Get-ChildItem -File -Directory

Using 3 states for this is much simpler for the script writer. It may be easier for the user too.

    #list all
    Get-ChildItem

    #list directories
    Get-ChildItem -Attributes D

    #list files
    Get-ChildItem -Attributes !D

    #there is no attribute for files

Question

So, would it be okay to use 3 states for switch parameters like the -Directory switch for Get-ChildItem?

New Practices for PowerShell Core

There are going to be a few new things which we need to keep in mind to make our scripts work across multiple platforms. Windows, Nano, Linux, FreeBSD and OSX, ARM and IoT ...

Best Practices for Scripts and Modules:

  1. Don't put aliases into scripts. Aliases are (currently) different on each platform. Double-check using a tool like ResolveAlias.
  2. Add the shebang to scripts: #!/usr/bin/env pwsh (or the more fragile: #!/usr/bin/pwsh -noprofile)
  3. Save scripts with Unix style line endings, or the shebang will not work.
    Mac and Windows both accept \n but the Unix shell interpreter will choke on the carriage return in the shebang. If you don't add the shebang, this doesn't matter. Note that you can always fix it:
    Set-Content $scriptPath ((Get-Content $scriptPath -raw) -replace "\r") -Encoding utf8
  4. Always encode in utf-8.
  5. Be careful with paths. Use forward slashes. Use Join-Path and Split-Path
  6. ONLY use the new -PSEdition value allowed for #requires or module manifests, when you need to restrict to Core, not for Desktop, since it only works in PowerShell 5.1+
  7. ALWAYS use three digits for versions (i.e. 5.0.0, not 5.0) because they may be parsed as SemanticVersion which currently doesn't work unless you provide the patch version explicitly.
  8. [System.Environment]::CurrentDirectory isn't kept up to date in .NET Core. But if you need to call path-sensitive .NET APIs, you need to use [System.IO.Directory]::SetCurrentDirectory( (Get-Location -PSProvider FileSystem).ProviderPath ) to update the environment. Note that PowerShell sets the working directory fine when launching apps...

Finally: Test the code on Linux and Nano, if possible. There are differences.
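
Put together, a cross-platform script header might look something like the sketch below. The data folder and JSON filter are made up for illustration; the point is the shebang, the explicit #Requires, Join-Path instead of hard-coded separators, and spelled-out cmdlet names instead of aliases:

    #!/usr/bin/env pwsh
    #Requires -Version 6.0

    # Build paths with Join-Path rather than hard-coding '\' or '/'
    $dataPath = Join-Path -Path $PSScriptRoot -ChildPath 'data'

    # Spell out cmdlet and parameter names; no aliases like ls, % or ?
    Get-ChildItem -Path $dataPath -Filter '*.json' |
        ForEach-Object { ConvertFrom-Json -InputObject (Get-Content -Path $_.FullName -Raw) }

Remember to save it with Unix line endings and UTF-8 encoding, as described above.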

ProTips

Use this in your profile to help with the UTF-8 encoding rule: $PSDefaultParameterValues["Out-File:Encoding"] = "UTF8"

Don't forget you can install PowerShell 6 alphas side-by-side on Windows (they install to Program Files, and don't use the same profile or module paths), so you don't have to set up Docker or a VM to get started. Just remember that on Windows you have access to things you won't have on Unix (like Set-ExecutionPolicy), so you should test elsewhere before publishing.

Cmdlet Differences

Dealing with Paths

Given you're accepting a $Path parameter in your function...

I think there are 4 scenarios, which are the flip sides of two conditions:

Condition One: Does it need to already exist?
Condition Two: Do you care about the provider?
  1. You need the $path to exist, you don't care about the provider
  2. You don't care if the $path exists, you don't care about the provider
  3. You need the $path to exist, you need it to be a specific provider
  4. You don't care if it exists, you need it to be a specific provider

Have I missed any scenarios?
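
For what it's worth, the first three scenarios seem straightforward to check up front; here's a rough sketch (scenario 4 is the one discussed below):

    # 1. Must exist, provider doesn't matter
    if (-not (Test-Path -Path $Path)) {
        throw "Path '$Path' does not exist."
    }

    # 2. May or may not exist, provider doesn't matter: nothing to validate up front

    # 3. Must exist AND must be a FileSystem path
    $resolved = Resolve-Path -Path $Path
    if ($resolved.Provider.Name -ne 'FileSystem') {
        throw "'$Path' is not a FileSystem path."
    }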

By far the hardest of these to deal with (I think) is when the path may not exist yet, and it has to be a specific provider. For instance, suppose you want to take a relative or absolute path to a file and pass it to $xml.Save($path).

If the path is relative, you must convert it to a full path, because [Environment]::CurrentDirectory might as well be randomized. Otherwise, you could probably leave it alone. But assuming it might be relative, and assuming your script might be called while $pwd is in a non-filesystem folder ... what do you do?

Does this work? Is there something simpler?

# Remember the caller's location (which may be on a non-FileSystem provider)
$PsCmdlet.SessionState.Path.PushCurrentLocation("fallback")
# Move to the current FileSystem location so relative paths resolve against it
$PsCmdlet.SessionState.Path.SetLocation( $PsCmdlet.SessionState.Path.CurrentFileSystemLocation )
# Resolve to a full provider path without requiring the target to exist
$Path = $PsCmdlet.SessionState.Path.GetUnresolvedProviderPathFromPSPath($Path)
# Put the caller back where they started
$PsCmdlet.SessionState.Path.PopLocation("fallback")
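
One way to package that approach into a reusable helper is sketched below; the Resolve-UnverifiedPath name is hypothetical, and it just wraps the push/resolve/pop sequence above in an advanced function:

    function Resolve-UnverifiedPath {
        [CmdletBinding()]
        param(
            [Parameter(Mandatory, ValueFromPipeline)]
            [string]$Path
        )
        process {
            $pathIntrinsics = $PSCmdlet.SessionState.Path
            # Remember the caller's location, whatever provider it's on
            $null = $pathIntrinsics.PushCurrentLocation("ResolveUnverifiedPath")
            try {
                # Resolve relative paths against the current FileSystem location,
                # even if $pwd is currently on a Registry/Certificate/etc. drive
                $null = $pathIntrinsics.SetLocation($pathIntrinsics.CurrentFileSystemLocation.ProviderPath)
                # Works whether or not the target exists yet
                $pathIntrinsics.GetUnresolvedProviderPathFromPSPath($Path)
            }
            finally {
                # Put the caller back where they started
                $null = $pathIntrinsics.PopLocation("ResolveUnverifiedPath")
            }
        }
    }

    # e.g. $fullPath = Resolve-UnverifiedPath -Path '.\output\report.xml'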

Add Section on Variable Validation outside of Parameters.

In PowerShell v3, the ability to use Validation attributes on variables outside of parameters was added.

It is mentioned in the New v3 Language Features page.

Here are a few quick examples.

ValidateRange

PS> [ValidateRange(1,10)][int]$ValidateRange = 1
PS> $ValidateRange = 3
PS> $ValidateRange = 11 
The variable cannot be validated because the value 11 is not a valid value for the ValidateRange variable. 
At line:1 char:1 
+ $ValidateRange = 11 
+ ~~~~~~~ 
    + CategoryInfo          : MetadataError: (:) [], ValidationMetadataException 
    + FullyQualifiedErrorId : ValidateSetFailure

ValidateSet

PS> [ValidateSet('Test1','Test2','Test3')][string]$ValidateSet = 'Test1'
PS> $ValidateSet = 'Test2'
PS> $ValidateSet = 'Test4'
The variable cannot be validated because the value Test4 is not a valid value for the ValidateSet variable.
At line:1 char:1
+ $ValidateSet = 'Test4'
+ ~~~~~~~~~~~~
    + CategoryInfo          : MetadataError: (:) [], ValidationMetadataException
    + FullyQualifiedErrorId : ValidateSetFailure

ValidateNotNullOrEmpty

PS> [ValidateNotNullOrEmpty()][string]$ValidateNotNullOrEmpty = 'Test1'
PS> $ValidateNotNullOrEmpty = 'Test'
PS> $ValidateNotNullOrEmpty = ''
The variable cannot be validated because the value  is not a valid value for the ValidateNotNullOrEmpty variable.
At line:1 char:1
+ $ValidateNotNullOrEmpty = ''
+ ~~~~~~~
    + CategoryInfo          : MetadataError: (:) [], ValidationMetadataException
    + FullyQualifiedErrorId : ValidateSetFailure

ValidateCount

PS> [ValidateCount(1,10)][array]$ValidateCount = @(1..9)
PS> $ValidateCount += 'test'
PS> $ValidateCount += 'test2'
The variable cannot be validated because the value System.Object[] is not a valid value for the ValidateCount variable.
At line:1 char:1
+ $ValidateCount += 'test2'
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : MetadataError: (:) [], ValidationMetadataException
    + FullyQualifiedErrorId : ValidateSetFailure

ValidateLength

PS> [ValidateLength(1,10)][string]$ValidateLength = 'Test1'
PS> $ValidateLength = 'Testing2'
PS> $ValidateLength = 'TestOverTen'
The variable cannot be validated because the value TestOverTen is not a valid value for the ValidateLength variable.
At line:1 char:1
+ $ValidateLength = 'TestOverTen'
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : MetadataError: (:) [], ValidationMetadataException
    + FullyQualifiedErrorId : ValidateSetFailure

ValidatePattern

PS> [ValidatePattern('^\d{3}[-.]?\d{3}[-.]?\d{4}$')][string]$ValidatePattern = '800-123-4567'
PS> $ValidatePattern = '000-000-0000'
PS> $ValidatePattern = '000-000-00000'
The variable cannot be validated because the value 000-000-00000 is not a valid value for the ValidatePattern variable.
At line:1 char:1
+ $ValidatePattern = '000-000-00000'
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : MetadataError: (:) [], ValidationMetadataException
    + FullyQualifiedErrorId : ValidateSetFailure

ValidateScript

PS> [ValidateScript({Test-Path $_})][string]$ValidateScript = 'C:\Windows\System32\cmd.exe'
PS> $ValidateScript = 'C:\Windows\System32\powercfg.exe'
PS> $ValidateScript = 'C:\Windows\System32\DoesNotExist.exe'
The variable cannot be validated because the value C:\Windows\System32\DoesNotExist.exe is not a valid value for the ValidateScript variable.
At line:1 char:1
+ $ValidateScript = 'C:\Windows\System32\DoesNotExist.exe'
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : MetadataError: (:) [], ValidationMetadataException
    + FullyQualifiedErrorId : ValidateSetFailure

Strict Mode

What's the community's view on Strict Mode? Is there one? Should there be one?
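
For anyone unfamiliar, here's a tiny illustration of what Set-StrictMode changes; it's meant only to frame the question, not to recommend a setting:

    Set-StrictMode -Version Latest

    $total = 40 + 2
    $total + $tota   # typo: with strict mode on, this raises an error
                     # instead of silently treating $tota as $null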
