Using ARM Client to explore Azure Resources

June 18, 2015

 

Introduction

ARMClient is a command-line tool with which you can examine your Azure resources. It is available as a Chocolatey package.

PROs and CONs

There are PROs and CONs to using this tool. The PRO: regardless of PS, CLI, or ARM, you can query Azure for resources and properties using RAW calls into it, so you won’t be faced with the variations and nuances of some resources not being available in one method and available only in another. The CON: be prepared to write RAW-formatted queries (it is not a GUI) and to receive raw JSON back.
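To make the RAW-call shape concrete, each ARMClient request maps to a plain HTTPS call against the ARM REST endpoint. A sketch of what a GET looks like on the wire (the subscription ID is a placeholder, and the bearer token is the one ARMClient obtains when you log in):

```http
GET https://management.azure.com/subscriptions/<SubscriptionID>/resourceGroups?api-version=2014-04-01
Authorization: Bearer <token>
```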

Sample Demos

Once you have it installed, fire up a command prompt and establish your connection. You can just type:

armclient login

This should pop up a login dialog, where you will enter your credentials to get a session connected.

From then on you can start PUTting, GETting, and doing various other operations.

Listing your subscriptions

GET your subscriptions:

armclient GET /subscriptions?api-version=2014-04-01

clip_image002[5]

Now, as a convenience for subsequent queries, use the SET command to store the subscription path in a variable, SUB, to use in further queries.

C:\WINDOWS\system32>set SUB=/subscriptions/<SubscriptionID>

Eg. clip_image003[5]

Listing your resources using GET

The call to resourceGroups below lists all of the resource groups in the given subscription:

armclient GET %SUB%/resourceGroups?api-version=2014-04-01

clip_image005[5]

GETting the properties of a specific resource

Now, to GET the properties of a specific resource, do a GET on the resource. Below I am GETting the properties of the WebSite-type resources in the resource group “PayAsYouGoDeleteMe” by using:

armclient GET %SUB%/resourceGroups/PayAsYouGoDeleteMe/providers/Microsoft.Web/sites?api-version=2014-04-01

clip_image007[5]

LIST the WebSite Configuration properties

List the WebSite’s Config properties with the following GET call.

armclient GET %SUB%/resourceGroups/payasyougodeleteme/providers/Microsoft.Web/sites/payasyougodeletesite/config/web?api-version=2014-11-01

SNAGHTML59edcf0

Now CHANGE a few of the configuration properties:

1. phpVersion to 5.6

2. pythonVersion to 3.4 and

3. webSocketsEnabled to true

using a property fragment in JSON as the parameter.
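The property fragment is along these lines; this is a sketch reconstructed from the values above, with webSocketsEnabled passed as the string "true" to match the PUT call later in the post:

```json
{
  "properties": {
    "phpVersion": "5.6",
    "pythonVersion": "3.4",
    "webSocketsEnabled": "true"
  }
}
```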

clip_image010[5]

Use the above as input to the PUT call itself, either as a file referred to with the @(FileName) syntax or inline, as shown below. Here I have used the inline input method.

armclient PUT %SUB%/resourceGroups/payasyougodeleteme/providers/Microsoft.Web/sites/payasyougodeletesite/config/web?api-version=2014-11-01 "{ \"properties\": { \"phpVersion\": \"5.6\", \"pythonVersion\": \"3.4\", \"webSocketsEnabled\": \"true\" } }"

clip_image012[5]

This returns the JSON shown above, with the new property values set as submitted by the PUT call.

CONCLUSION

Tools such as ARMClient and ARM Explorer are powerful because they let you work directly against the underlying resources, without regard to the limitations of PS, the CLI, or the portal. They operate directly against the actual objects in Azure, without intermediaries.


Git into VSO and using Build vNext to pipeline into Azure deployment slots.

June 16, 2015

 

Building from VSO and deploying to Azure

Create your solutions in VS

Note that I just created a solution in Visual Studio and have not checked it in anywhere except a local Git repo on my system, which is not yet connected to any Team Project. Next, we want to G(e)it it into VSO.

clip_image001

Create a Git repo on VSO and push your code into it

Go to your VSO account and create a Team Project, if you do not have one, choosing Git as your repo type. If you already have a Git repo in VSO, you can push into that as well. We are going to use the command line to push the code into Git.

clip_image003

Install the Git Command line tools

clip_image005

Establish REMOTE REPO

Go to Team Explorer in Visual Studio and establish the Git repo you created above in VSO as the remote for your local Git repo on your workstation. See below for how to do this.

clip_image007

Check in from the local Git repo you created, which at this point is not connected to any remote repo

Go to a Visual Studio command prompt and, using the full path to Git, PUSH your code into the remote repo. (Note that the Git path does not get added to the Visual Studio command prompt, so you have to use the full path, as shown below.)
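The push sequence itself is plain Git. Here is a sketch of the commands involved; a local bare repository stands in for the VSO remote URL (which in practice looks something like https://<account>.visualstudio.com/DefaultCollection/_git/<Project>), and the file name and commit message are illustrative:

```shell
# A local bare repo stands in for the VSO-hosted remote (hypothetical URL above).
REMOTE="$(mktemp -d)/vso-standin.git"
git init --bare "$REMOTE"

# The existing local repo with code that is not yet connected to any remote.
WORK="$(mktemp -d)"
cd "$WORK"
git init
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"
echo "hello" > readme.txt
git add readme.txt
git commit -m "Initial commit"

# Establish the remote repo and push the local history into it.
git remote add origin "$REMOTE"
git push origin HEAD:master
```

Against a real VSO remote, the only differences are the URL in git remote add and the credential prompt on the first push.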

clip_image009

See your code in VSO

clip_image011

Having got the code into the Git repo, we can now check out these features: (a) Build vNext, (b) RM deployment to Azure environments (Windows and Linux), and (c) ARMing them.

Build vNext

Create a DEV branch in your local REPO

 

clip_image012

resulting in >>>>>

 clip_image013

From now on we work in DEV branch and push to master when required.

Set up a build definition on VSO for this project and enable CI on DEV branch

Now go to VSO and create a build definition. A few key screenshots are shown below.

clip_image014

Choose Repository to be DEV branch

clip_image016

Enable CI Trigger

clip_image017

Save your Build vNext definition.

 

Now that you have set things up in VSO, it is time to get some action into your code. Get back into your favorite editor (VS, VS Code, Eclipse, or whatever – even Notepad), edit the code, and trigger CI by committing into the remote repo. When you commit, note that your first commit will be local to the repo; you then need to SYNC to the remote (i.e., VSO) so that it triggers CI on VSO.
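In Git terms, the commit/sync distinction looks like the sketch below; a local bare repository stands in for the VSO remote, and the branch, file, and message names are illustrative:

```shell
# Local bare repo standing in for the VSO-hosted remote.
REMOTE="$(mktemp -d)/vso-standin.git"
git init --bare "$REMOTE"

cd "$(mktemp -d)"
git clone "$REMOTE" work
cd work
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"

git checkout -b DEV             # work happens on the DEV branch
echo "change" > feature.txt
git add feature.txt
git commit -m "Edit code"       # local commit only: nothing reaches the remote yet

git push -u origin DEV          # the "sync": only now would CI fire on VSO
```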

Once you do that, you should see your build getting kicked off on VSO’s hosted build controller (the default) and the output getting deployed into the SLOT you wanted it to go to.

clip_image019

Note that you can also have your own build machine pointing to your VSO and build off there. This is explained separately below in another section.

Deploy to Azure Web Site from your CI build

Add your (OR connect your) Azure Subscription as Service feeding from VSO

Go to the gear icon for your project in VSO, then go to Services, click on Azure as shown below, and enlist your Azure subscription.

clip_image021

Add the Publish-AzureWebDeployment.ps1 PowerShell script and set the script arguments

(Note that this is a kind of workaround, or simply another way to do it.) If, on the other hand, you had set up your VS project as a proper Azure-based web site project, meaning you used the OOB Azure web template that comes with Visual Studio, then all parameters would be fed correctly and your web site would get deployed, and you would not have to go this route.

Download the PowerShell from the below URL.

https://github.com/Microsoft/vso-agent-tasks/blob/master/Tasks/AzureWebPowerShellDeployment/Publish-AzureWebDeployment.ps1

Add it into your SRC tree and refer to this script in the Script Path as shown below. For those of you who use Release Manager and do DSC-based deployments, this part of checking your deployment script into source code will sound familiar. It also drives home the point that your deployment should be treated as CODE: checked in and version controlled. Essentially, we checked in the deployment script and will use it in the actual deployment when we add the deployment step as shown below. Note that to logically reach the deployment step, we first have to (a) build the solution, (b) test the solution, and so on. As you can see below, these have been made into discrete individual steps that you add based on what you need, by clicking on "Add Build Step". In the earlier XAML builds, these were all entwined in a workflow.

Since PowerShell is an execution option, it provides a lot of flexibility for adding things that are not in the default set of "Steps" provided OOB. You can then refer to these PowerShell scripts (which you should, of course, check in along with your SRC code) in the PowerShell step via the Script Path field, as shown below.

clip_image022

Script Arguments:

-WebSiteName "MyTestWebSiteinAzure" -Package "$(build.stagingDirectory)\BldAndRMvNxt.zip" -ConnectedServiceName "000000000-000000000-0000-0000000000" -AdditionalArguments "-Slot QA"

Once you have set this up correctly, you should see your code changes and check-ins being propagated and deployed straight to your Azure web site.

Using RM and Deploying to Pipelines

The current version of RM (i.e., RM on premises) as of today cannot access the build definitions created on VSO, though it can consume the XAML builds. However, with the RM capabilities being surfaced in the online version (Build video – http://channel9.msdn.com/Events/Build/2015/3-649), this will no longer be a limitation. That opens up a richer set of deployment options for building complex DEV, QA, and PROD release pipelines.

Agent Choices when using VSO

When building with VSO you have a plethora of agent choices. The most obvious is the hosted agent provided by VSO, where everything happens under the covers for you.

You also have a choice of building your own Build Agents.

image

For this, you download the agent binaries as shown above, along with the config file, from the VSO portal; you can then deploy them to your servers and configure them to communicate with your VSO account as shown below. It will ask for your account in a popup to authenticate to VSO.

clip_image024

I set the agent to run interactively, just for grins, to see what it spews out as I use it. This is useful in situations where you want to debug.

clip_image025

Once you have set this up, you should see your agent in the pool you added it to; in my case, the default pool, as shown below.

clip_image026

Once you do this, you can start building on your own agent instead of the hosted build agents, which is useful if you are concerned about billing/consumption.

Deleting “TEAM” created in Team Projects in TFS

June 16, 2015

 

How does one delete a “Team” that was created in a Team Project? Maybe you were just playing with it, or someone on the DEV team created a “Team”, misspelled it, and now wants to delete it because it pollutes the UI and messes up options in the reports.

It might not be very obvious how to do this. Hence this blog post.

 

If you go to the Admin Web Page (by clicking the Gear Icon on the right) for the Project in the Server, you should be able to see the list of teams.

1. Go to your Specific Team Project’s Web Portal

2. Click on the “Gear” icon at the right to take you to the admin page for the team project.

3. Once on the landing page go to the “Security Tab”

4. Click on the Team you want to delete. (Note you cannot delete the default team that gets created when you created the Team Project)

5. On the details pane to the right, click on the edit drop-down arrow as shown.

clip_image001

6. Now click on “Delete”. Before you do, make sure you are not deleting by mistake some “Team” that you or the development teams need.

7. Confirm the delete and you are done.

This is also accessible from the Overview page, as shown below:

image

Using Azure ARM Template’s OUTPUT section

June 16, 2015

 

Extract, using the outputs section, the properties of the various resources you deployed with an ARM template.

 

Scenario

 

You were deploying your resources using an ARM template and wanted to print out the names of some of the resources you created/deployed/updated. This could be just to make sure your deployment executed successfully, or to pipe the output into a subsequent command.

In this case I wanted to print/extract/dump the network-related properties, more specifically the ones that were passed as parameters to the template file. A snippet of the JSON file where these parameters are defined is shown below.

 

clip_image002

 

Now, in the “OUTPUTS” section, they are simply referred to via the parameters() function, as shown below.
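For illustration, an outputs entry that simply echoes back a value passed in as a parameter looks roughly like this (the names vnetName and virtualNetworkName are hypothetical, standing in for the ones in the screenshot):

```json
"outputs": {
  "vnetName": {
    "type": "string",
    "value": "[parameters('virtualNetworkName')]"
  }
}
```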

 

clip_image004

 

Executing the ARM deployment using New-AzureResourceGroupDeployment.

Note:

  1. You should execute the template deployment with the New-AzureResourceGroupDeployment PS cmdlet.
  2. You should have created the ResourceGroup prior to the execution of New-AzureResourceGroupDeployment.
  3. If you use New-AzureResourceGroup instead, it will execute the deployment but will not print the outputs.

The following shows the execution of the template, with the values getting printed in the outputs section.

clip_image006

 

Azure ARM templates–Tips on using Outputs

June 16, 2015

 

Introduction

Continuing the series on ARM templates, here we explore another variation on using the OUTPUTS section. In the previous blog post I showed how to extract values you passed into the template via parameters. Here we will see slightly more advanced usage.

 

Objective

We want to be able to print, at the end of an ARM deployment, certain properties of the deployment we did, using the OUTPUTS section of the ARM template. You would do this to confirm that the deployment did indeed do what you wanted it to do, as well as perhaps to pipe the output into a subsequent command for further operations.

Reference

For the full schema definition that the ARM template relies on, refer here:

https://azure.microsoft.com/en-us/documentation/articles/resource-group-authoring-templates/

 

JSON file and named resources being created

 

In this example we are interested in outputting the full property set of the resources we are creating. For instance, in the snippet below, a public IP address and a network interface resource, named “publicip” and “nicapache”, are being requested; once such a resource gets created, it also has a set of properties associated with it. In the outputs section the full property list will be printed out, because we output a reference to the entire object. The full template file is attached at the end of this post.

 

SNAGHTML3080c26

 

You can also see the other two named networkInterface resources being requested – via the copyindex() function, looped by the count parameter, which is set to 2. They get named “nicmemcached0” and “nicmemcached1”, with 0 and 1 appended to the base name.

 

image

 

OUTPUT section of JSON file

Here, in the outputs section, the full set of properties of each network interface is printed. As you can see, all named resources are “referenced” by their resource name in the outputs section.

 

clip_image004

 

In the above snippet, you can also see the references to the two named resources, “nicmemcached0” and “nicmemcached1”, in the OUTPUTS section of the template file, where 0 and 1 were appended by the copyindex() function, looped by the count parameter, which is set to 2.
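A sketch of what those output entries look like: each one applies the reference() template function to the resource name, so the full runtime property object of the resource is emitted (the exact entries in the author's template are in the attachment):

```json
"outputs": {
  "nicmemcached0": {
    "type": "object",
    "value": "[reference('nicmemcached0')]"
  },
  "nicmemcached1": {
    "type": "object",
    "value": "[reference('nicmemcached1')]"
  }
}
```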

Reference – Output section’s schema allowed in ARM template

The following is the schema definition for the OUTPUTS section that is acceptable for API version 2015-01-01. Your output specification should, at a minimum, conform to the schema definition below; it tells you how to format it and what is needed. Note that as the product evolves, you can expect this schema to evolve as well, to accommodate additional features.

clip_image005

 

Executing the ARM-based deployment via PowerShell

It should be noted that PS merely passes the ARM template along to the underlying ARM service in the background. This should not be confused with using PS imperatively to do deployments as you normally would. Once you get the ARM template worked out correctly, you can fire it repeatedly, whereas with an imperative PS script you cannot: it would error out saying the resource already exists. That is not the behavior you see when you use ARM templates. In general, as a best practice, you should use ARM templates to do your sophisticated deployments, as they are repeatable and consistent, which is not the case with straight PS used in an imperative manner.

Step#1 – Create a ResourceGroup

In my case I named it TestARMOUTPUT and used West US as the location. If the underlying resource or feature/capability you want to use is not yet rolled out in a specific region, deploy to a region where the capability has been turned on. Eventually, every Azure region is likely to have all of the capabilities.

New-AzureResourceGroup -Name TestARMOUTPUT -location "West US" -force

Step#2 – Deploy the ARM template into your resource group

Now deploy the resources you specified in the ARM template using the resource group you created previously.

New-AzureResourceGroupDeployment -Name TestARMOUTPUT -location "West US" -templatefile .\templates\linux-vm.json

It will prompt you for a few parameters, which you have to enter. You can choose your own identifier names instead of what is shown below.

 

clip_image007

 

OUTPUT on executing the deployment using the ARM template

Once the template execution starts, if there are any errors you have to correct them until it runs through to completion. Incidentally, the OUTPUTS section gets printed only if the template executes successfully; otherwise you will not see it. This is another good reason to use OUTPUTS, as it confirms the template did execute successfully.

The following is the output window, with all of the four values that were requested by the OUTPUTS section of the ARM template (the complete template is attached to this post).

You can see the name, type, and value of each output printed as requested in the template. There are four outputs, with their names (in red), types (in green), and actual values (in yellow) highlighted below.

clip_image009

Snipping continued.

clip_image011

 

As you can see above, the two NICs have been created with their own IP addresses, one at 10.1.1.4 and the other at 10.1.1.5, where the PrivateIPAddresses are listed.

Portal View

Now, if you go to the portal, you can view the resources you created in the Resources blade, as shown below.

image

 

image

And if you click through to the deployment history, you can see the OUTPUTS clearly, and you can copy the entire text out if needed, as shown below.

image

 

Credits

I worked with Microsoft Principal Group PM Charles Lamanna, who provided the source for this blog post and helped clarify it.

Headless Nano Server from Microsoft

May 6, 2015

Headless Nano Server

Creating the Nano Server VHD from Server 2016 CTP

clip_image002

Attaching the Nano Server VHD and starting it up in Hyper-V

clip_image004

Connecting to the dark underground – the GUI-less, headless Nano Server

clip_image006

Some considerations on the TFS SETUP (or TFS ADMIN) account

April 8, 2015

In earlier versions of the product, the TFSSETUP (or TFSADMIN) account was explicitly referenced. It was a placeholder name for an account that a customer could use to do the installation of TFS.

If you were installing TFS in a domain environment, this had to be a domain account; if TFS was being deployed in a standalone/workgroup kind of environment, it could be a workgroup account. Later versions of the installation guide don’t explicitly call out a TFSSETUP/TFSADMIN account, other than saying that the account used to install TFS must have administrative privileges on the servers where the TFS components are being laid down.

From a practical standpoint it is better to have an account (again let’s call it TFSSETUP/TFSADMIN – a mere placeholder name) to do all of the installation of various components of TFS, and then use this same account to do any subsequent patching.

Now, this discussion pertains to TFS in a domain environment, as that is the predominant deployment scenario for TFS in a business setting. You can get away with using one of the user accounts (say, the individual account of someone whose role is that of a TFS administrator) in lieu of an explicit TFSSETUP account. However, there are some drawbacks to doing this, and it is better to use a dedicated domain account for setup/admin, rather than an individual user’s domain account, for the TFS setup and servicing operations. Some of the drawbacks are listed below.

  • Consider the scenario where the TFS admin (the user owning the individual account) decides to leave the organization long after he/she set up TFS. Most likely this user is then purged from the corporate AD. So if you have to do some update later on, you will have to designate a new individual organizational account (creating it first, if required) and then add admin privileges on the servers in question. How long is this going to take in your domain environment – minutes or days?

 

  • You want to apply patching (security or cumulative updates, CUs) to the TFS components at a later point in time. See #1.

 

  • You want to run TFSConfig commands with various options and parameters as part of a subsequent hardware upgrade, or as part of attaching new collections. You can again get by with a domain user account, but it is easier to run all such operations with a specific TFSSETUP/TFSADMIN domain account created for this purpose.

 

  • Audit – if multiple individual user accounts have altered TFS components, auditing becomes harder. It is easier if the TFSSetup or TFSAdmin account is the only one used to set up, service, and patch TFS, with its password given only to the TFS admins. That way you always know that only the TFS admins ever touch the TFS installation.

 

  • Preventing inadvertent use of the TFSSetup/TFSAdmin account. Once the initial TFS setup and configuration are done, simply disable the account. Whenever you need to service/patch TFS, re-enable the setup/admin account, do the servicing operation, and then disable it again. In this way you can make sure this account is used only at predetermined points in time (presumably driven by tickets created in your ticketing system), and avoid the temptation to make changes to a running TFS environment willy-nilly, as might happen with a regular organizational/individual user account.

 

  • As opposed to individual organizational accounts, a TFSSetup/TFSAdmin account is less likely to have variances in its privileges/permissions in the domain. Say TFS was first set up by “John Doe”; his individual account’s set of privileges and permissions could be totally different from that of the second TFS admin, say “Sue Ann”, who came on board a year down the line. Sometimes these differences can have side effects, which would not be the case if you uniformly used something like a TFSSetup/TFSAdmin account throughout the life-cycle of the TFS installation.


Cloud deployment at a developer’s fingertips

December 1, 2014

New features and cloud capabilities at a developer’s fingertips

Introduction:

The Azure tooling team within VS has provided a new set of templates that make it easy for a developer or development team to programmatically control the spin-up and tear-down of multiple VMs in a controlled fashion.

Benefits and Applications

The kinds of scenarios in which a DEV team or ISV can put this to use are quite wide and varied. As mentioned above, a DEV team can maintain a spin-up-and-deploy solution as CODE, share it within the team, and launch the setup and teardown on demand or on a schedule.

An ISV or another product group could take this and package a configurable (in terms of the number and types of VMs) deployment solution that their end users or customers can use to spin up a desired configuration on demand or on a schedule. What would earlier be a series of operations initiated through the portal or PowerShell can now be done with a single click, once you produce a tested and working executable that can be distributed. Of course, the subscription ID and the settings file, which I show below as pulled and hard-coded, would have to be fetched from the end user programmatically (which is fairly trivial to do) as input to the executable.

One can also imagine solutions like Lab Manager for TFS being wrapped up entirely using these kinds of constructs.

HOW TO

We are going to explore the “Deploy and Manage Virtual Machine” template that comes with Azure SDK 2.5 installed on top of VS 2015 as shown below.

Open up VS 2015 Preview and launch a project as shown below.

clip_image002

Solution Folder

clip_image003

Getting your Client authorization in Order

As you know, to perform any operation against the Azure platform from any sort of client (be it PS, the portal, or VS), your publish settings file needs to be on the client so it can communicate back to Azure in a secure manner. So first download the publish settings file and store it in your solution folder.

Getting the publish settings file

Open your Program.cs file and click on the link below to download the settings file. I like to keep it within my VS solution folder, so I saved it right in the folder structure of my solution. Change the file path as shown below.

clip_image005

Further Exploration

The files Program.cs and VMManagementController.cs contain methods and parameters that you can tweak and configure for Windows-based VMs. This could be enhanced to do similar operations for Linux VMs as well.

VMManagementController.cs uses various methods, as shown below, and it is quite possible to add to or modify the existing methods to enhance the default capability provided when the controller code gets called from main() in Program.cs.

image

Examine the various methods in here to modify the VM customizations, and add parameters beyond the defaults as you see fit.

Publish your deployment solution as a WebJob

clip_image001

Choose your run-mode as shown below.

clip_image003

This will create a publish settings file as shown below.

clip_image005[6]

clip_image007clip_image009

clip_image010

In your build output window you can see the following flash by: it compiles, builds, and then, if successful, goes on to publish the bits to your Azure web site to run as a WebJob.

clip_image012

clip_image013

Now to run the app, or launch the WebJob

 

You can now run this solution right from within VS or from within the portal.

Running from the portal

To run from the portal, log in to the Azure portal, navigate to the WebJob, and initiate it from there, if you have set it to run on demand as I have done.

Navigate to your Azure portal and expand the web application you chose during the publish phase. You will see the WebJobs you deployed earlier.

clip_image002[4]

Click on that to take you to the Web Job as shown below.

clip_image003[4]

clip_image004

Click Run to Start the WebJob. Once it runs to completion you should see your VMs up and running.

Running from VS

To run from within VS, just start it like any normal VS project, by clicking “Start”, or F5, or Ctrl+F5, and you get the familiar and convenient experience of setting breakpoints and debugging if needed.

The following is a series of screenshots that show up on the console when you launch from within VS. Note that you can customize this set of messages that pop up on the console.

clip_image006

clip_image007

Leaving this as-is, you should now be able to RDP into your VM. You can come back later and shut down the VM.

Conclusion

Now you might ask: what is the big deal between spinning up VMs like this, as opposed to clicking through the portal or the Server Explorer in VS? The difference is the power to leverage a set of VMs customized to an N-tier application, and to explore scenarios where you deliver customized solutions on top of the bare VM-creation ability of the portal, PS, or Server Explorer. The scenarios are endless for an ISV to exploit, a la TFS Lab Manager or Release Manager, but now directly under the control of your code. This is pretty neat for those with the time and resources to craft solutions for themselves, or to provide a configurable and customizable multiple-VM creation package, especially for an ISV.

