VSTS Extensions: Improving load time by concatenating modules



 

Introduction

 

When implementing a Visual Studio Team Services (or TFS) extension, it is good practice to split your code into several modules and load them dynamically with the provided AMD loader, using the require/define functions to load modules asynchronously.

 

While having one file per module improves code legibility and reuse, loading many files separately can carry a performance penalty.

 

In this post I will show you a technique that lets you keep all the benefits of organizing your code in several files (one per module) while at the same time improving your load time by combining your modules into a single file.

 

This technique will not impact the way you write your modules or your coding flow; it just requires a tiny change in the way you declare your module(s).

 

Visual Studio Team Services does something similar for its own modules: it bundles modules and CSS. I'm not aware whether this is done automatically at runtime or precalculated on deploy. Whatever technique is used, it is not done automatically for extensions; with this technique you can benefit from bundling in your own modules as well.

 

Combining all module files into a single one reduces load time, since your browser will need just one connection to load your code. Granted, the file is bigger, but you were going to load all the files anyway, and it only matters on first load since the file is cached afterwards; because there is only one file, there is no need to open multiple connections. Combine this with minification and your gains can be rather nice (with no development costs and without sacrificing code legibility).

 

When support for HTTP/2 is widespread, module concatenation will probably become unnecessary, but in the meantime this technique can be used successfully to improve load times with minimal effort.

 

Loading modules in an extension

 

In order to load modules (either out-of-the-box modules or your own), an extension/module uses the require function (or VSS.require if doing it in the "main" HTML file), or define if you are defining a module that depends on other modules.

 

define(["require","exports", "VSS/Utils/Core", "VSS/Controls", "VSS/Controls/Menus"], 

       function (require, exports, Core, Controls, MenuControls) {

 

Example of using define, from the extensions sample repository on GitHub

 

When you use define, you are stating that you are going to define your module and that it has external modules as dependencies (the module names are passed as an array). The dependencies are loaded asynchronously, and when all required modules are loaded your callback is invoked (with references to the required modules).
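For illustration, here is a minimal, hypothetical module definition (the exported function is invented for this example):

// the callback only runs after "VSS/Controls" has been loaded
define(["require", "exports", "VSS/Controls"], function (require, exports, Controls) {
    // expose something so other modules can depend on this one
    exports.sayReady = function () {
        console.log("dependencies loaded, module ready");
    };
});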

 

This means you can keep your code nice and tidy, but it also means a (small) performance hit, since multiple connections will be opened to fetch the modules separately.

 

Combining modules

 

By combining all your (needed) modules into a single file, you are effectively reducing the need to open multiple connections, thereby reducing load times. The benefit will be greatest for extensions that have many modules or are used over low-speed/high-latency connections.

 

Below you can see an example from an extension I'm developing. I will show the impact of deploying the extension with 3 separate files (one per module) versus deploying it with just a single file (with all modules concatenated).

 

This extension has three modules (all are used in the main file); I have captured the loading of the files in the Chrome dev tools network view.

 

The "two" versions are exactly the same code and no changes were made between them; they have just been deployed differently: one with 3 files, the other with a single file containing all 3 modules concatenated (in a certain order).

 

These timings were collected a single time, using a mobile 3G connection in a place where 3G is spotty at best.

 

Using 3 separate files

 

[Screenshot: Chrome network view showing three separate module files]

 

Three files were fetched, with a total download size of 25.5 KB, taking 2562 ms.

 

Using a single file

 

[Screenshot: Chrome network view showing a single concatenated file]

 

A single file was fetched, with a download size of 23.3 KB, taking 1730 ms.

 

The difference in download size is because Chrome's size column shows not only the size of the file itself but all the content downloaded on the wire (headers and all), so as a bonus we also save a few bytes of overhead.

 

Observed savings

 

This is hardly scientific, since it's impossible to guarantee equal conditions outside a lab environment, but I executed five runs with one file and five runs with three files and observed a 36.8% decrease in load time (at higher speeds/lower latency the difference will probably be less noticeable).

 

The runs were all executed in Chrome over a flaky 3G mobile connection (a slow, high-latency connection makes the improvements more visible). Hardly scientific, so take this with a grain of salt, but the results were consistent and the standard deviations were in the same order of magnitude (though I'm hardly a statistician).

 

Determining concatenation order of modules

How can you do this for your own modules?

 

The first thing you need to do is determine the name of the file that will hold the result of the module concatenation: the entry point, so to speak.

 

This is the file that will be loaded first, so use the script that is loaded in your HTML file (the HTML file being the one defined in the uri property of your contribution).
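For reference, the contribution might look like this in the extension manifest (vss-extension.json); this is a hypothetical sketch, with the id, name, and target invented for the example:

{
    "contributions": [
        {
            "id": "tags-manage-hub",
            "type": "ms.vss-web.hub",
            "targets": [ "ms.vss-work-web.work-hub-group" ],
            "properties": {
                "name": "Tags Manage",
                "uri": "index.html"
            }
        }
    ]
}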

 

In this particular case, this is what I have in the HTML file (snippet):

 

<script type="text/javascript">
        VSS.init({
            usePlatformScripts: true,
            usePlatformStyles: true,
            moduleLoaderConfig: {
                paths: {
                    "scripts/": "scripts/"
                }
            }
        });

        // Wait for the SDK to be initialized
        VSS.ready(function () {
            require(["scripts/TagsManage"], function (TagsManage) {
                // ... extension start-up code elided ...
            });
        });
</script>

 

So all my modules' code will be placed in the TagsManage.js file; now we just need to determine the order of that file's content.

 

The current content of the file should go at the end of the concatenated file.

 

Why? Because the TagsManage module requires the other modules, so it should be the last one defined. More specifically, it uses the TagContent and TagApi modules.

 

If we look again at the network tab, we can see the order in which the modules were requested.

 

[Screenshot: Chrome network view showing the module request order]

 

The modules were requested in the order TagsManage, TagContent, TagApi; this typically means they should be concatenated in the reverse of the order in which they were loaded.

 

Rather than relying on request order, though, we should mentally build the dependency graph of the modules and concatenate them in reverse order, from the leaves up to the root.

 

For this particular extension, this is the dependency graph:

 

[Diagram: module dependency graph]

 

TagsManage depends on both TagContent and TagApi; TagContent depends on TagApi.

 

So TagsManage (concatenated) = TagApi + TagContent + TagsManage

 

In this exact order. What happens is that when TagContent is defined, TagApi has already been defined, so there is no need to fetch it externally; the loader just uses it without fetching.

 

Changes needed to the module definitions

 

There are some minor changes needed to the modules themselves: the modules need to be named.

 

Typically your module looks something like this (in this case, the definition of TagContent):

 

define(["require", "exports", "jQueryUI/draggable", "jQueryUI/droppable", 
       "VSS/VSS", "VSS/Service", "VSS/Utils/Core", "VSS/Controls", 
       "TFS/WorkItemTracking/RestClient", 
       "scripts/TagApi"],
    function (require, exports, jDrag, jDrop, VSS_Platform, VSS_Service, VSS_CORE, Controls, TFS_WIT_WebApi, TagApi) { ...

 

The first parameter of define contains an array of your dependencies; notice that we take a dependency on scripts/TagApi.

 

This works fine with VSS.require, since in VSS.init we specify the path for the scripts prefix:

 

VSS.init({
           usePlatformScripts: true,
           usePlatformStyles: true,
           moduleLoaderConfig: {
               paths: {
                   "scripts/": "scripts/"
               }
           }
       }); 

 

But this doesn't work in define, since define isn't aware of what the scripts prefix is. We could use just TagApi, which means the TagApi module would be fetched from the same path the module we are defining was loaded from.

 

But we don't want that, since loading an external file is exactly what we are trying to avoid.

 

Remember we stated previously that the concatenation order is TagApi first, then TagContent, then TagsManage. So TagApi has already been defined; we just need to give it a name. If the original definition of the TagApi module is

 

define( ["require", "exports", "VSS/VSS", 
	 "VSS/Controls", "VSS/Authentication/Services"],
    function (require, exports, VSS_Platform, Controls, VSS_Auth_Service) {

 

We will give it a name (the name goes in the first parameter of define, and the array of dependencies moves to the second parameter). The definition now becomes

 

define( "scripts/TagApi",
	["require", "exports", "VSS/VSS", 
	 "VSS/Controls", "VSS/Authentication/Services"],
    function (require, exports, VSS_Platform, Controls, VSS_Auth_Service) {

 

Do this for all your modules (use the scripts prefix or whatever you prefer).

Now the modules will keep working either as one file per module or as a single concatenated file, as described in the previous section.

 

After naming all the modules, you are ready to incorporate the concatenation into your development process (see the next section).

 

To summarize, this is what the TagsManage.js file looks like (omitting the content of the modules):

 

define("scripts/TagApi", ...
define("scripts/TagContent", ...
define("scripts/TagsManage", ...

 

Automating concatenation in your development process

 

The concatenation should be integrated into your build process, so the files are concatenated right before they are packaged and uploaded to an account or to the marketplace.

 

It really depends on how you do your builds. I use Grunt to automate everything (I can do everything inside Visual Studio or Visual Studio Code, and use the same process in Visual Studio Build with the out-of-the-box Grunt task); other people use Gulp or any of the other JavaScript build tools out there.

 

In my particular case, I minify the files, then concatenate them using grunt-contrib-concat (which can do a lot of other things besides simple concatenation), and then package the files using tfx.
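As an illustration, here is a minimal Gruntfile.js sketch (the file names are assumed to match the extension above; grunt-contrib-concat preserves the order of the src array, which is what enforces the leaves-first concatenation order):

module.exports = function (grunt) {
    grunt.initConfig({
        concat: {
            dist: {
                // order matters: leaves of the dependency graph come first
                src: [
                    "scripts/TagApi.js",
                    "scripts/TagContent.js",
                    "scripts/TagsManage.js"
                ],
                dest: "dist/scripts/TagsManage.js"
            }
        }
    });

    grunt.loadNpmTasks("grunt-contrib-concat");

    // run with: grunt
    grunt.registerTask("default", ["concat"]);
};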


Sharing secrets among Team Build and/or Release Management definitions



 

When creating a Team Build 2015 (often called Build vNext) or a Release Management definition, you can store secrets in variables (for example, to pass a password or a token to a task).

Secret variables cannot be seen when viewing/editing a build definition (hence the name secret); they can only be changed.

Secret variables have some disadvantages:

  • Their scope is limited to the build/release definition; if you want to use the same secret in several definitions, you have to replicate it among them.
  • You can't apply security to variables; any user with permission to edit a build definition can change the secret.

To overcome these two disadvantages, I have created a build/release task that provides a single instance of a secret (per team project) and can be used to pass secrets to tasks.

The task is called Set Variables with Credential and is part of a bigger package called Variable Tasks Pack, which contains other tasks that deal with variables:

  • Set Variable: sets a variable value and optionally applies a transformation to it.
  • Set Variables with Credential: sets the username/password from an existing service endpoint.
  • Set Variable from JSON: extracts a value from JSON using JSONPath.
  • Set Variable from XML: extracts a value from XML using XPath.
  • Update Build Number: allows you to change a build number.
  • Increment Version: increments a semver version number.

But the topic of this post is how to share secrets among multiple definitions.

When you install the Variable Tasks Pack into your VSTS account from the marketplace, the extension will also register a new service endpoint type, called Credential.

[Screenshots: adding a new Credential service endpoint]

This allows you to store a username and a password in your team project (don't worry, "password" is just a name; you can use it to store any secret, for example I used it to store a Personal Access Token to publish extensions to the Visual Studio Marketplace). The connection name is just a label you can use to give it a meaningful name.

Using a service endpoint gives you the ability to define permissions (per credential) by adding users/groups to the Endpoint Readers group.

 

[Screenshot: Endpoint Readers permissions]

This lets you define which users have permission to use a given credential in a build/release definition. People with permission to edit build/release definitions are therefore not able to change secrets, and can only use the ones they are allowed to (you can also add users to Endpoint Administrators to define who can edit the credential endpoint).

After you have defined your credential(s), you can use them in your build/release definitions. The provided task has three parameters:

  • The name of the connection
  • The name of the variable where the username will be stored (optional)
  • The name of the variable where the password will be stored (optional).

The task sets the value of a regular variable, which can then be used in other tasks as if it had been defined normally.

For example, if you set the "Password variable name" to MyVar:

[Screenshot: task configuration with the password variable name set to MyVar]

you can then use it in subsequent tasks like any other variable, e.g. $(MyVar):

[Screenshot: $(MyVar) used in a subsequent task]
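For instance (a hypothetical sketch, assuming the credential stores the marketplace Personal Access Token mentioned earlier; the .vsix file name is invented), a command-line task could pass the secret to tfx when publishing an extension:

tfx extension publish --vsix my-extension.vsix --token $(MyVar)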

With this task, not only can you control who can change/use a given secret, you also get a central place where the secret is stored: if you update it, all definitions that use it pick up the change immediately, and you don't have to update every definition manually as you would with a secret variable.


Visual Studio Online hubot scripts replies formatting



I have previously blogged about using Hubot with Visual Studio Online on:

Using Hubot with Visual Studio Online

Using Hubot with Visual Studio Online Team Rooms

Running Hubot for Visual Studio Online on an Azure Web Site

Hubot scripts are independent of the chat room type being managed and can be executed regardless of where they are running.

However, sometimes they are tailored to work better with certain adapters.

This is the case with the Visual Studio Online Hubot scripts, which work better with Visual Studio Online team rooms. For example, work item numbers are preceded with a # symbol, since team rooms automatically convert them into a link to the work item (a plaintext hyperlink is also sent for other adapters).

Version 0.3.1 adds a small feature that allows you to configure the format of the messages sent to the chat room, in case the chat room you are using supports a richer format.

It supports three formats:

  • Plaintext (default): sends the messages and links in plaintext
  • HTML: links are properly formatted, which is a better experience for users
  • Markdown: links are properly formatted, which is a better experience for users

To define the format, set the HUBOT_VSONLINE_REPLY_FORMAT environment variable to one of the (case-sensitive) values plaintext, html, or markdown.
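For example, on a un*x box you might set it before starting Hubot (a minimal sketch; adapt it to however you configure environment variables in your hosting setup):

export HUBOT_VSONLINE_REPLY_FORMAT=markdown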


Visual Studio Online hubot scripts updated to support API V1.0



I have previously blogged about using Hubot with Visual Studio Online on:

Using Hubot with Visual Studio Online

Using Hubot with Visual Studio Online Team Rooms

Running Hubot for Visual Studio Online on an Azure Web Site

A few weeks ago the Visual Studio Online REST API reached the V1.0 milestone. This means it has left preview mode and the preview API is now deprecated; it is still available and works as-is, but it will eventually be removed from the product.

This means the Visual Studio Online Hubot scripts had to be updated to use version 1.0 of the API.

My pull request to make the scripts use V1.0 of the API has been accepted (version 0.3.0).

You just need to update the scripts (manually or using npm) and their dependencies (which should be automatic if you use npm).

If you are using OAuth you will also need to recreate the application (since there is now a more granular set of permissions, and the scripts ask for a reduced set of permissions).

Note: since you can't change existing permissions, you will need to delete the application on Visual Studio Online and create a new one.

These are the required authorized scopes.

[Screenshot: required authorized scopes]

After that, reconfigure the application id and the secret in the configuration and you are all set.

Users will need to reauthorize Hubot, but don't worry: if reauthorization is needed, the scripts will automatically detect it and ask the user to authorize again; you don't need to reconfigure anything else.

If you are using Hubot with Visual Studio Online team rooms, the Hubot adapter for Visual Studio Online has also been updated to use V1.0, so you should update it as well.


Running Hubot for Visual Studio Online on an Azure Web Site



 

This is the third and last post in this series.

In the first post, Using Hubot with Visual Studio Online, I showed how you can install Hubot on a un*x box (I used an Azure virtual machine) and run commands against a Visual Studio Online account from a Campfire chat room, by installing the Hubot scripts for Visual Studio Online.

In the second post, Using Hubot with Visual Studio Online Team Rooms, I showed how you can connect the same installation to a Visual Studio Online team room instead of Campfire, using the Hubot adapter for Visual Studio Online.

In this post I will explain what you need to run Hubot connected to one (or more) Visual Studio Online team room(s) so it responds to commands, but running on an Azure web site instead of a un*x box.

Since Visual Studio Online uses notification events, it is well suited to running on an Azure web site with optimized resources.

Running Hubot on an Azure web site has several advantages over running it as a Node.js process in a virtual machine:

  • It is cheaper. You can run Hubot on a free Azure web site, and it's hard to beat free.
  • You don't need to buy an SSL certificate (it's not mandatory, but secure communication between Visual Studio Online and your Hubot instance is recommended), since Azure web sites support SSL out of the box.
  • Unlike a VM, it doesn't require any administration.

Node.js running on an Azure web site is not executed as a standalone Node.js process; it is hosted in IIS. This means IIS manages the Hubot process lifetime and can kill/unload it as it sees fit (no worries, it will wake up as soon as it receives an event). It also means that if you are using the "join room" feature, Hubot may not appear visible in the room, even though it will respond to commands if properly configured.

Before continuing, let me introduce a Hubot concept I haven't mentioned in previous posts: the brain.

The Hubot brain is an abstraction representing a persistent storage mechanism that Hubot (and its scripts) can use to store data and state. The default out-of-the-box mechanism used by Hubot is Redis.

Azure doesn't have a Redis service (it supports Redis cache, but we need a persistent mechanism), so we need something to replace it (we could use a VM, but that would defeat the purpose of using an Azure web site).

We will use hubot-azure-scripts, which provides an implementation of a Hubot brain that uses Azure blob storage to store its data.

Installing hubot-azure-scripts is quite simple: install it with npm install hubot-azure-scripts, configure the brain in hubot-scripts.json, and then set a few environment variables so the scripts can connect to your Azure blob storage account, as sketched below.
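Here is a hedged sketch of what that configuration might look like; the script path and the environment variable names are assumptions, so check the hubot-azure-scripts documentation for the exact values:

hubot-scripts.json:

[
    "hubot-azure-scripts/brain/azure-blob-brain"
]

Environment variables (names assumed) pointing at your storage account:

HUBOT_BRAIN_AZURE_STORAGE_ACCOUNT=<storage account name>
HUBOT_BRAIN_AZURE_STORAGE_ACCESS_KEY=<storage account access key>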

I will not describe the remaining steps to install and configure a Hubot instance connected to Visual Studio Online team rooms, since the procedure is thoroughly described in the installation docs.
