VSTS: Collect Telemetry for build and release tasks using Application Insights

Developers of extensions for Visual Studio Team Services (or Team Foundation Server) often use Application Insights to collect telemetry about their extensions.

Metrics such as usage, performance and errors are collected to learn which features are used, whether they are working, how they are performing and whether they are failing, so the extension can be improved to deliver more value to its end users.

There are several types of extensions

  • Dashboard widgets
  • Hubs
  • Actions
  • Work Item form elements
  • Build and release tasks


Except for build and release tasks, all other extension types run in a web browser, so it is easy to collect telemetry and Application Insights is a natural fit.


There is plenty of information on how to integrate Application Insights with your extensions (here and here, just to reference a few). If you use the ALM | DevOps Rangers generator-vsts-extension to generate the skeleton of your extension, Application Insights support is (optionally) added for you automatically.




At first glance, it might seem that we cannot use Application Insights for build & release tasks, since they do not run in a browser, and that we are stuck with the statistics the Visual Studio Marketplace provides us (install and uninstall numbers).

In this post I’m going to show how you can use Application Insights to measure usage of build & release tasks (referred to as tasks from now on).

Tasks are implemented in either PowerShell or JavaScript. Tasks should preferably be implemented in JavaScript, since they can then be executed on all platforms (the build and release agent is cross-platform and can run on Windows, Linux or macOS), unless there is a strong reason to implement them in PowerShell (which can only run on Windows agents).


I’m going to explain how Application Insights can be used in a task implemented in JavaScript (using TypeScript, to be more exact), but the same technique can be used in PowerShell.


Below you can see the most straightforward implementation of a task (in TypeScript).

import tl = require('vsts-task-lib/task');

async function run() {
    try {
        // the task logic goes here
    } catch (err) {
        tl.setResult(tl.TaskResult.Failed, err.message);
    }
}

run();


In order to collect telemetry we need to install the Application Insights for Node.js NPM module (e.g. npm install applicationinsights --save).

Next, we need to import the module and initialize it by adding the following snippet (outside the run function). Don’t forget to enter your Application Insights instrumentation key (or externalize it).

import appInsights = require('applicationinsights');

appInsights.setup("YOUR INSTRUMENTATION KEY")
    .setAutoCollectRequests(false)
    .setAutoCollectPerformance(false)
    .setAutoCollectExceptions(false)
    .setAutoCollectDependencies(false)
    .setAutoCollectConsole(false)
    .start();

var client = appInsights.defaultClient;

The Application Insights SDK is now initialized, auto collection has been disabled and data collection has been started.

We now need to explicitly collect the information we want.

Tracking Usage


For our example, these are the requirements

  • Track how many executions a task has had
  • Track how many accounts/collections are actively using the extension
  • Collect errors, in order to track issues that users don’t bother to report
  • Don’t collect any user data nor any information that may lead to user identification; all information is anonymous


You may have others, but these are the ones we are going to solve in this post.

There are several kinds of events we can send to Application Insights; we can track things like page views, events, metrics, exceptions, requests, log traces or dependencies.

Since we only want to track usage, we have two choices: track a request or track a custom event.


Track using a request event


Conceptually, the execution of a task is not a request; a request represents a web request. Semantics apart, the request is a suitable way to track task executions, even if we are stretching the request definition a little.

If we use a request these are some things we get out of the box

  • We can track the response time of the task execution (typically this doesn’t matter, since a task may be executed on machines with very different specs, or against very different data)
  • Usage (and performance) data is visible on the Application Insights overview blade




We need to call the track request API and provide the following parameters:

  • Name – The name of the request; we can use the task name in this field.
  • URL – Since we don’t have a URL, we can use anything we want here (it doesn’t need to be a valid URI), so we can use either Build or Release to know whether the task was executed in a build or in a release.
  • Duration – The execution time (if we want to track performance; otherwise use 0).
  • Success status – Whether the request succeeded or failed.
  • Result code – The result code (use anything you want, but you need to specify it, otherwise the request is ignored).


There is just one thing missing: how can we track usage across accounts? For that we can use properties, which are sent in every event as custom dimensions. This would be the implementation of our task:


async function run() {
    try {
        //do your actions

        let taskType = tl.getVariable("Release.ReleaseId") ? "Release Task" : "Build Task";
        client.commonProperties = {
            collection: tl.getVariable("system.collectionId"),
            projectId: tl.getVariable("system.teamProjectId")
        };
        client.trackRequest({ name: "taskName", url: taskType, duration: 0, success: true, resultCode: "OK" });
    } catch (err) {
        tl.setResult(tl.TaskResult.Failed, err.message);
    } finally {
        // flushing is covered in the "Sending Data" section
    }
}


Notice we send the collection id and the team project id. These are GUIDs that are opaque; they do not reveal any client information and can’t be used to track users. But if you are worried, you can be extra cautious and pass them through a hash function for further anonymization.


Tracking using a custom Event


A custom event can be used to send data points to Application Insights; you can associate custom data with an event. You can use data that can be aggregated (to be viewed in Metrics Explorer) or data to be available in Search. Both are queryable in Application Insights Analytics.

async function run() {
    try {
        //do your actions

        let taskType = tl.getVariable("Release.ReleaseId") ? "Release Task" : "Build Task";
        client.trackEvent({
            name: "Task Execution",
            properties: { taskType: taskType, taskName: "taskname", collection: tl.getVariable("system.collectionId"), projectId: tl.getVariable("system.teamProjectId") }
        });
    } catch (err) {
        tl.setResult(tl.TaskResult.Failed, err.message);
    } finally {
        // flushing is covered in the "Sending Data" section
    }
}

With the event, I opted to use Task Execution as the event name. It allows us to quickly count the number of task executions (regardless of whether it is in a release or a build context), and if we need the context where the task has been executed, we can get it from the taskType property.


Errors Telemetry


Finally, we want to collect error information, so we add the following code to the catch handler:


catch (err) {
    client.trackException({ exception: err });
    tl.setResult(tl.TaskResult.Failed, err.message);
}


If we just send the exception event, we will miss the failed executions in the usage numbers, so we also need to send the usage event (either trackEvent or trackRequest), or we will not have usage data for task failures.

If we are using events, this would be the code for the catch handler:

catch (err) {
    client.trackException({ exception: err });
    let taskType = tl.getVariable("Release.ReleaseId") ? "Release Task" : "Build Task";
    client.trackEvent({
        name: "Task Execution",
        properties: { failed: true, taskType: taskType, taskName: "taskname", collection: tl.getVariable("system.collectionId"), projectId: tl.getVariable("system.teamProjectId") }
    });
    tl.setResult(tl.TaskResult.Failed, err.message);
}

Notice we added a failed property to the properties object.

If we are using request events, this would be the catch handler:

catch (err) {
    client.trackException({ exception: err });
    let taskType = tl.getVariable("Release.ReleaseId") ? "Release Task" : "Build Task";
    client.commonProperties = {
        collection: tl.getVariable("system.collectionId"),
        projectId: tl.getVariable("system.teamProjectId")
    };
    client.trackRequest({ name: "taskName", url: taskType, duration: 0, success: false, resultCode: "Error" });
    tl.setResult(tl.TaskResult.Failed, err.message);
}

The only difference is that we set the success property to false, so the request appears among the failed requests.


Sending Data


To make sure data is sent (the SDK batches data), we call the flush method in the finally handler, to guarantee data is sent to Application Insights before the task execution finishes.

} finally {
    client.flush();
}

Opting In/Opting out


Optionally, you can allow users to opt in or opt out via a task parameter, so users can decide if they want to contribute anonymous telemetry data.
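One way to expose that choice is a boolean input in the task’s task.json (a sketch; the input name and labels are mine):

```json
{
    "name": "enableTelemetry",
    "type": "boolean",
    "label": "Send anonymous usage data",
    "defaultValue": "true",
    "required": false,
    "helpMarkDown": "If enabled, anonymous usage and error telemetry is sent to the extension authors."
}
```

The task would then read it with tl.getBoolInput("enableTelemetry") and use the result to drive disableAppInsights.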




Telemetry can be disabled via the disableAppInsights property of the client’s config object.

var client = appInsights.defaultClient;
client.config.disableAppInsights = !enabled;

Analyzing Data


After deploying the tasks with telemetry collection enabled we are now ready to analyze usage data.


We have several ways to visualize or analyze Application Insights data; we can use the Azure Portal or Application Insights Analytics.

Note: this is not a primer on Application Insights, it is just a glimpse of some ways to analyze the data collected from tasks.

Azure Portal


If you decided to go with the track request route, all executions are immediately visible on the Overview blade.




If you decided to go the events route, you can get a similar chart by opening Metrics Explorer and configuring the chart to display Events, grouped by Event Name.




You can also group by Operating System to see whether tasks were executed on a Windows, Linux or macOS agent.

If you wish to see event details, just open Search and click on the event [1] you wish to inspect.




Under Custom Data you can see the custom event data we have sent: the collection identifier, the project identifier, the name of the task and the context where the task has been executed (Build in this case).


Application Insights Analytics


Application Insights Analytics provides you with a search and query tool to analyze Application Insights data.


Let’s start with the simplest query: list all the requests, ordered by date.




To get an idea of task executions over time, and of which platform the agent is running on, we can render the following chart (data is grouped in 5 minute intervals).


However, we can also use it to answer more elaborate questions, like how many executions each task had in distinct team projects (we could also get how many different accounts, but it is easier to demonstrate with a single account).




In the last six hours, the task called “taskName” has been executed 60 times in two different team projects.

VSTS: Generating badges for Releases

Visual Studio Team Services (and Team Foundation Server) supports build badges out of the box, giving you a URL for an image that can be embedded in a web page (e.g. a wiki or a GitHub page) to visualize the (always up to date) status of a build.




Unfortunately, Release Management doesn’t offer the same support for releases. Fortunately, it is easy to implement a release badge generator with a few lines of code, using VSTS built-in extensibility mechanisms.


This is what I’m going to show you in this post: how I built a VSTS release badge generator using VSTS web hooks, an Azure Function to generate the badges, and Azure Blob storage to store them, practically for free (it will cost you at most a few cents per month for a huge number of badges/accesses).

 badge example




We want a system that:

  • Is fast
  • Is free, or very cheap, to run
  • Has very few moving parts
  • Doesn’t require access to VSTS data
  • Doesn’t require maintenance or management
  • Is stateless; we don’t want to manage any data besides the badge itself


The architecture consists of a system that statically generates the badge every time a release is deployed; the badge is regenerated only when there is a new deployment.

This means that when a user sees a badge, they are only accessing a static file (no computation is needed, so no extra costs are incurred; computation costs are much higher than storage costs). The badge is not generated in real time, and there is no need to have access to VSTS. This means not only that there are no credentials to manage for accessing VSTS, but also that the code is simpler and, especially, that there is no attack surface against VSTS via the generator.

Since the files are statically generated and accessed via HTTP/HTTPS, they can be cached (in proxies and in browsers), improving access speed to badges and saving bandwidth (and storage costs). By default the cache is configured with a low value, but it can be adjusted to different (expected) release frequencies.


This means we can generate badges while being decoupled from VSTS, and generate badges for many team projects (and even different accounts) without having access to them; the only thing we need is to provide the endpoint of the generator and configure a web hook.


It is an event-driven architecture, with all the advantages and disadvantages that brings.


This design has some drawbacks:

  • We only have badges after the generator is configured and after a release is deployed; it doesn’t work for past deployments. If you can’t wait for the next release, you can force a redeploy just to generate the badge.
  • The badges are totally decoupled from the release definition and environments. If a release definition or an environment is deleted, the badges are not deleted.
    • We could have a process to clean up orphaned badges, but for simplicity we just leave them alone (storage is cheap).




You will need an Azure account to host the code, and at least one VSTS account to generate badges for.

Service Hooks


VSTS Service Hooks allow integrating VSTS with other applications: every time a certain event happens in VSTS (a work item has changed, a build has started, a pull request was created, and all other sorts of events) the application is notified in near real time.


Service hooks can be used to integrate out of the box with services like AppVeyor, Azure App Service, Campfire, Jenkins, Microsoft Teams, Slack, Office 365 or ZenDesk, just to name a few. In our case we are interested in a generic integration, so we are going to use a Web Hook to receive an event every time a release deployment finishes.


We can create one web hook [1] with the Release deployment completed [2] event per release definition (if we wish to generate release badges only for some releases or environments), or a web hook for any release definition [3] (this will generate a badge for all releases in any environment).




The event has a JSON payload whose schema varies with the type of event we are receiving. The Release deployment completed event contains all the information we need to generate a badge; among other things, it contains:

  • Release definition id/name
  • The release number
  • The environment


With a (near) real time event every time a release is deployed, we just need to generate the badge based on the event payload.
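As an illustration, extracting what we need from the payload might look like the sketch below. The property paths are assumptions based on a sample payload and may need adjusting against a real captured event:

```javascript
// Pull the badge-relevant fields out of a "Release deployment completed" event.
// Property paths are illustrative; verify them against a real payload.
function extractBadgeData(event) {
    var environment = event.resource.environment;
    return {
        definitionName: environment.releaseDefinition.name,
        environmentName: environment.name,
        releaseName: environment.release.name
    };
}

// A trimmed-down, made-up payload with just the fields we use
var sampleEvent = {
    resource: {
        environment: {
            name: 'Production',
            releaseDefinition: { name: 'Fabrikam CD' },
            release: { name: 'Release-12' }
        }
    }
};

console.log(extractBadgeData(sampleEvent));
```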


Generate the badge using Azure Function


We need to generate the badge every time an event is received, and an Azure Function is very suitable for this task. Some of its advantages (among others):

  • It’s very cheap to run (see hosting plan comparisons)
    • If you already have an Azure App Service you can host it there (since you are already paying for it), so no extra costs
    • Pay per use: pay only for what you use, with a generous free monthly cap (which means you can run it for free unless you have a big number of deployments occurring)
  • No need to manage infrastructure or care about scalability.
  • First-class integration with Azure components: we can use other Azure resources with only a few lines of code, or declaratively.


We need to implement a function that, upon receiving a JSON payload via HTTP/HTTPS, generates a badge based on the event data and stores it somewhere publicly accessible via HTTP/HTTPS.


I’m not going into the details of creating an Azure Function; there is plenty of information available online.


We need to implement an Azure Function that is triggered by an HTTP request; when the event is received, we generate the badge based on its data (we will generate a badge with the name of the release definition and the release number).



Generating the badge


We can either manipulate image files to generate the badge, use an external library, or use an external service to do the heavy lifting for us.


I’ve chosen the latter and used the Shields IO service to generate the badges. Shields IO is a service that can generate badges in several image formats (JPEG, GIF, SVG, …) for dozens of services (e.g. Travis, AppVeyor, CircleCI, etc.), but it can also generate generic badges, and that is what we will use to generate our badges with a single HTTP call.
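For generic (static) badges, Shields IO takes the label, value and color in the URL path, with dashes and underscores escaped. A sketch of building such a URL (the helper names are mine):

```javascript
// Shields IO static badge escaping: '-' becomes '--', '_' becomes '__',
// and spaces can be written as '_'.
function escapeShieldsField(value) {
    return value.replace(/-/g, '--').replace(/_/g, '__').replace(/ /g, '_');
}

// Build the badge URL from the release definition name and release number
function badgeUrl(definitionName, releaseName, color) {
    return 'https://img.shields.io/badge/' +
        escapeShieldsField(definitionName) + '-' +
        escapeShieldsField(releaseName) + '-' +
        color + '.svg';
}

console.log(badgeUrl('My Definition', 'Release-12', 'green'));
```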


Storing the generated badges


After the badge has been generated, we need to store it somewhere where anyone can access it. Azure Storage is a natural choice:

  • It’s dirt cheap
  • Badges can be accessed via HTTP/HTTPS anywhere in the world
  • It’s fast (files are pre-generated) and you can control the time to live in browser caches to save bandwidth costs.
  • It has built-in replication mechanisms to guarantee durability and high availability.


So we will use Azure Storage blobs; all generated badges will be stored in a container.


Writing a file from an Azure Function takes only a few lines of code. We could use Azure Functions’ declarative capabilities to write the file into Azure storage with no code at all, but we want to control not only the storage container and file path, but also have more fine-grained browser cache directives. Even so, writing the file to storage is just a few lines of code.
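As a sketch of what those few lines decide (the helper below is illustrative, not the repo’s actual code): the blob path and the content settings, which map onto the options object accepted by the azure-storage module’s createBlockBlobFromText:

```javascript
// Compute where a badge blob is stored and how it should be served.
// maxAgeSeconds drives browser/proxy caching (see the cache discussion above).
function badgeBlobSettings(definitionName, environmentName, maxAgeSeconds) {
    var sanitize = function (name) {
        return name.toLowerCase().replace(/ /g, '-');
    };
    return {
        path: sanitize(definitionName) + '/' + sanitize(environmentName) + '.svg',
        contentSettings: {
            contentType: 'image/svg+xml',
            cacheControl: 'public, max-age=' + maxAgeSeconds
        }
    };
}

console.log(badgeBlobSettings('Fabrikam CD', 'Production', 300));
```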


The container where the badges are stored only has to fulfill one requirement: anonymous access must be enabled.






By default, functions are set to use function-level security (if you use Get Function URL in the portal, the default code is automatically added to the URL).




If you will have subscriptions from multiple accounts or team projects, you may want to use different keys, so you can revoke one if needed without disrupting all the other subscriptions. It is advisable to set up multiple keys.


Learn how to work with function or host keys


Be aware that anyone you share the function URL with can flood the function with fake events, not only generating fake data but also (potentially) making you incur extra costs (compute and storage).


Show me the code


You must be wondering: so much talk and practically no details at all about how this was implemented. I haven’t gone into details since the code is quite simple and has plenty of comments.


The solution is not foolproof; for example, it doesn’t deal with prolonged failures or unavailability of the Shields IO service (transient failures will not prevent badge generation, since the Service Hooks built-in retry mechanism takes care of those).


The code works and can be used in production, but it has been written as a learning experiment, since I wanted to try some things with Azure Functions and at the same time produce something useful.


All the code is available as open source under an MIT license on GitHub. The repo contains not only the source code but also a PowerShell script (which uses an ARM template) to automatically provision and configure the Azure Function. Setting up Web Hook subscriptions and deploying the code are manual tasks.


The readme file in the repo has all the instructions (way longer than this post) on how to provision, deploy the code and configure Web Hooks, as well as how to parameterize the generator (different cache settings, badge styles, etc.).


Provisioning the function and deploying the generator is very easy to automate using VSTS Release Management; it can be automated with only two tasks.





VSTS Extensions: Improving load time by concatenating modules




When implementing a Visual Studio Team Services (or TFS) extension, it is good practice to split your code among several modules and load them dynamically with the provided AMD loader, using the require/define functions to load modules asynchronously.


While having one file per module improves code legibility and reuse, it can carry a performance penalty to load many files separately.


In this post I will show you a technique where you can still organize your code in several files (one per module), with all the benefits that brings, and at the same time improve your load time by combining your modules into a single file.


This technique will not impact the way you write your modules or your coding flow; it just requires a tiny change to the way you declare your module(s).


Visual Studio Team Services also performs something similar for its own modules: it bundles modules and CSS. I’m not aware whether this is done automatically at runtime or precomputed on deploy. Whatever technique is used, it is not done automatically for extensions; with this technique you can also benefit from bundling your own modules.


Combining all module files into a single one has the benefit of reducing load time, since your browser will need just one connection to load the file. Granted, the file is bigger (but you were going to load all the files anyway, and it’s only the first load; after that the file is cached), but since it’s only one file there is no need to open multiple connections. Combine this with file minification and your gains can be rather nice (with no development costs and without sacrificing code legibility).


When support for HTTP/2 is widespread, module concatenation will probably become unnecessary, but in the meanwhile this technique can be successfully used to improve load times with minimal effort.


Loading modules in an extension


In order to load modules (either out of the box modules or your own), an extension/module uses the require function (or VSS.require if done in the “main” HTML file), or define if you are defining a module that depends on other modules.


define(["require", "exports", "VSS/Utils/Core", "VSS/Controls", "VSS/Controls/Menus"],
    function (require, exports, Core, Controls, MenuControls) {
        ...
    });


Example of using define, from the extension samples repository on GitHub.


When you use define, you are basically stating that you are going to define your module and that you have external module(s) as dependencies (the names of the modules are passed as an array). The dependencies will be loaded asynchronously, and when all required modules are loaded your callback will be called (with references to the required modules).


This means you can keep your code nice and tidy, but it also means (a small) performance hit, since multiple connections will be opened to fetch the modules separately.


Combining Modules


By combining all your (needed) modules into a single file, you are effectively reducing the need to open multiple connections, thereby reducing load times. The benefit will be felt mainly in extensions that have many modules or are used over low speed/high latency connections.


Below you can see an example from an extension I’m developing. I will show the impact of deploying the extension with 3 separate files (one per module) versus a deploy with just a single file (with all modules concatenated).


This extension has three modules (all used in the main file); I have captured the loading of the files in the Chrome dev tools network view.


They are exactly the same code and no changes have been made between the “two” versions; they have just been deployed differently: one with 3 files, and the other with one file containing the combination (in a certain order) of all 3 modules.


These timings were collected a single time, using a mobile 3G connection in a place where 3G is spotty at best.


Using 3 separate files




Three files were fetched, with a total download size of 25.5 KB, taking 2562 ms.


Using a single File




A single file was fetched, with a download size of 23.3 KB, taking 1730 ms.


The difference in download size is because in Chrome the size column shows not only the size of the file itself but all the content downloaded on the wire (headers and all), so as a bonus we also save a few bytes of overhead.


Observed savings


This is hardly scientific, since it’s impossible to guarantee equal conditions in a non-lab environment, but I’ve executed five runs with one file and five runs with three files, and there was a 36.8% decrease in load time (on higher speeds/lower latency this will probably be less noticeable).


The runs were all executed in Chrome using a flaky 3G mobile connection (a slow, high latency connection, to make improvements more visible); hardly scientific, so take this with a grain of salt. But the results are consistent and the standard deviations are in the same order of magnitude (though I’m hardly a statistician).


Determining concatenation order of modules

How can you do this for your own modules?


The first thing you need to do is determine the name of the file that will hold the result of the module concatenation, the entry point so to speak.


This is the file that is loaded first, so you will use the file that is loaded in your HTML file, the one defined in the uri property of your contribution.


In this particular case, this is what I have in the html file (snippet)


<script type="text/javascript">
    VSS.init({
        usePlatformScripts: true,
        usePlatformStyles: true,
        moduleLoaderConfig: {
            paths: {
                "scripts/": "scripts/"
            }
        }
    });

    // Wait for the SDK to be initialized
    VSS.ready(function () {
        require(["scripts/TagsManage"], function (TagsManage) {
            // ...
        });
    });
</script>


So all my module code will be placed in the TagsManage.js file; now we just need to determine the order of this file’s content.


The current content of the file (the TagsManage module itself) should go at the end of the concatenated file.


Why? Because the TagsManage module requires the other modules, so it should be the last one defined. More specifically, it uses the TagContent and TagApi modules.


If we look again at the network tab, we can see that




the modules have been requested in the order TagsManage, TagContent and TagApi (this typically means they should be concatenated in the reverse of the order they were loaded).


However, we should mentally build the dependency graph of the modules and concatenate them in reverse order, from the leaves to the upper nodes.


In this particular extensions this is the dependency graph




TagsManage depends on both TagContent and TagApi; TagContent depends on TagApi.


So TagsManage (concatenated) = TagApi + TagContent + TagsManage


In this particular order: what happens is that when TagContent is defined, TagApi has already been defined, so there is no need to fetch it externally; the loader will just use it.


Needed changes for module definition


There are some minor changes needed to the modules themselves: the modules need to be named.


Typically your module looks something like this (in this case, the definition of TagContent):


define(["require", "exports", "jQueryUI/draggable", "jQueryUI/droppable",
       "VSS/VSS", "VSS/Service", "VSS/Utils/Core", "VSS/Controls",
       "VSS/Service", "TFS/WorkItemTracking/RestClient", "scripts/TagApi"],
    function (require, exports, jDrag, jDrop, VSS_Platform, VSS_Service, VSS_CORE, Controls, VSS_Service, TFS_WIT_WebApi, TagApi) { ...


The first parameter of define contains an array of your dependencies; notice that we take a dependency on scripts/TagApi.


This works fine with VSS.require, since in VSS.init we specify the path of the scripts prefix.


VSS.init({
    usePlatformScripts: true,
    usePlatformStyles: true,
    moduleLoaderConfig: {
        paths: {
            "scripts/": "scripts/"
        }
    }
});


But this doesn’t work in define, since define isn’t aware of what the scripts prefix is. We could use just TagApi, which would mean the TagApi module would be fetched from the same path the defining module was loaded from.


But that is exactly what we are trying to avoid: loading an external file.


Remember we stated previously that the concatenation order is TagApi first, then TagContent, then TagsManage. So TagApi has already been defined; we just need to give it a name. So if the original definition of the TagApi module is


define( ["require", "exports", "VSS/VSS",
	 "VSS/Controls", "VSS/Authentication/Services"],
    function (require, exports, VSS_Platform, Controls, VSS_Auth_Service) {


We will give it a name (using the first parameter of define; the array of dependencies is now the second parameter). The definition now becomes


define( "scripts/TagApi",
	["require", "exports", "VSS/VSS",
	 "VSS/Controls", "VSS/Authentication/Services"],
    function (require, exports, VSS_Platform, Controls, VSS_Auth_Service) {


Do this for all your modules (use the scripts prefix or whatever you want).

Now the modules will continue working, either as one file per module or as a single concatenated file, as described in the previous section.


After naming all the modules, you are ready to incorporate the concatenation into your development process (see next section).


To summarize, this is what the TagsManage.js file looks like (omitting the content of the modules).
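As a sketch, the concatenated file is just the three named defines in dependency order (module bodies elided):

```javascript
// TagApi first: it does not depend on the other two modules
define("scripts/TagApi",
    ["require", "exports", "VSS/VSS", "VSS/Controls", "VSS/Authentication/Services"],
    function (require, exports, VSS_Platform, Controls, VSS_Auth_Service) {
        /* ... TagApi module body ... */
    });

// TagContent next: its "scripts/TagApi" dependency is already defined above
define("scripts/TagContent",
    [/* ... other dependencies ... */ "scripts/TagApi"],
    function (/* ... */) {
        /* ... TagContent module body ... */
    });

// TagsManage last: the entry point, depending on both modules above
define("scripts/TagsManage",
    [/* ... other dependencies ... */ "scripts/TagContent", "scripts/TagApi"],
    function (/* ... */) {
        /* ... TagsManage module body ... */
    });
```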




Automate concatenation into your development process


The concatenation should be integrated into your build process, so the files are concatenated right before they are packaged and uploaded to an account or the marketplace.


It really depends on how you do your builds. I use Grunt to automate everything (I can do everything inside Visual Studio or Visual Studio Code and use the same process in Visual Studio Build, using the out of the box Grunt task); other people use Gulp or any of the other JavaScript build tools out there.


In my particular case, I minify the files and then concatenate the defines using grunt-contrib-concat (which can do a lot of other stuff besides dumb concatenation), and then package the files using tfx.

Sharing secrets among Team Build and/or Release Management definitions


When creating a Team Build 2015 (often called Build vNext) or a Release Management definition, you can store secrets in variables (for example, for passing a password or a token to a task).

Secret variables cannot be seen when viewing/editing a build definition (hence the name secret); they can only be changed.

Secret variables have some disadvantages

  • Their scope is limited to the build/release definition; if you want to use the same secret in several definitions, you have to replicate it among them
  • You can’t apply security to variables; if a user has permission to edit a build definition, they can change the secret

In order to overcome these two disadvantages, I have created a build/release task that can be used to keep a single instance of a secret (per team project) and to pass secrets to tasks.

The task is called Set Variables with Credential, and it’s part of a bigger package called Variable Tasks Pack, which contains other tasks that deal with variables:

  • Set Variable – Sets a variable value and optionally applies a transformation to it
  • Set Variables with Credential – Sets the username/password from an existing service endpoint
  • Set Variable from JSON – Extracts a value from JSON using JSONPath
  • Set Variable from XML – Extracts a value from XML using XPath
  • Update Build Number – Allows you to change the build number
  • Increment Version – Increments a semver version number

But the topic of this post is how to share secrets among multiple definitions.

When you install the Variable Tasks Pack in your VSTS account from the marketplace, the extension will also register a new service endpoint type, called Credential.



This will allow you to store a username and a password in your team project (don’t worry, “password” is just a name: you can use it to store any secret; for example, I used it to store a Personal Access Token to publish extensions to the Visual Studio Marketplace). The connection name is just a label you can use to give it a meaningful name.

Using a service endpoint gives you the possibility of defining permissions (per credential), by adding users/groups to the Endpoint Readers group.



This allows you to define which users have permission to use a given credential in a build/release definition. This means people with permission to edit build/release definitions are not able to change secrets, and can only use the ones they are allowed to (you can also add users to Endpoint Administrators to define who can edit the credential endpoint).

After you have defined your credential(s), you can use them in your build/release definitions. The provided task has three parameters:

  • The name of the connection
  • The name of the variable where the username will be stored (optional)
  • The name of the variable where the password will be stored (optional).

This sets the value of a regular variable, which can be used in other tasks as if it were a defined variable.

For example, if you set the “Password variable name” parameter to MyVar


You can then use it in subsequent tasks, like any other variable (e.g. $(MyVar)).


With this task, not only can you control who can change/use a given secret; it is also possible to have a central place where the secret is stored. If you update it, all definitions that use it will pick up the change immediately, and you don’t have to update every definition manually, as you would with a secret variable.

Visual Studio Online hubot scripts replies formatting

I have previously blogged about using Hubot with Visual Studio Online on:

Using Hubot with Visual Studio Online

Using Hubot with Visual Studio Online Team Rooms

Running Hubot for Visual Studio Online on an Azure Web Site

Hubot scripts are independent of the type of chat room Hubot is managing and can be executed regardless of where they are running.

However, sometimes they are more or less tailored to work better with some adapters.

This is the case with the Visual Studio Online hubot scripts, which work better with Visual Studio Online team rooms. For example, work item numbers are preceded with a # symbol, since team rooms automatically convert them into a link to the work item (but a hyperlink is also sent in plain text for other adapters).

Version 0.3.1 adds a small feature which allows you to configure the format of the messages sent to the chat room, in case the chat room you are using supports a richer format.

It supports three formats

  • Plaintext (default) – Sends the messages and links in plain text
  • HTML – Links are properly formatted, which is a better experience for users
  • Markdown – Links are properly formatted, which is a better experience for users

In order to define the format, set the HUBOT_VSONLINE_REPLY_FORMAT environment variable to (case sensitive) plaintext, html or markdown.
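For example, to pick Markdown (assuming you configure Hubot through environment variables, as is usual):

```shell
# Case sensitive: plaintext, html or markdown
export HUBOT_VSONLINE_REPLY_FORMAT=markdown
```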