
This post will be about a new sample of using Sandboxable, which I wrote about before.

In this post, we will walk through the steps to create a Microsoft Dynamics CRM plug-in that, on deletion of any record, stores the deleted data as a file on Azure blob storage.

When using Azure blobs to store data, you should enable Storage Service Encryption.

As usual, you will find the links to the complete source code at the end of this post.

Setting up the project

For this sample, the steps for setting up the project are the same as the steps described in the previous post, so I won’t list them here again.

Writing the plug-in

I’ve based the plug-in code on the MSDN article Write a plug-in.

Getting the deleted entity

To get the details about the deleted record, we read them from the Pre-Image. The registration of a Pre-Image will be described later in this post.

Entity entity = context.PreEntityImages["Target"];
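To put this line in context, here is a minimal sketch of the surrounding Execute method, assuming the standard IPlugin pattern from the CRM SDK (the sample’s actual plumbing may differ):

public void Execute(IServiceProvider serviceProvider)
{
    // The execution context carries the message details and the registered images.
    IPluginExecutionContext context = (IPluginExecutionContext)
        serviceProvider.GetService(typeof(IPluginExecutionContext));

    // "Target" must match the name given to the Pre-Image at registration.
    Entity entity = context.PreEntityImages["Target"];

    // ...
}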

Getting the connection details

To connect to Azure blob storage, you need:

  1. The storage account name
  2. One of the storage account access keys

For this sample, we’ll use a JSON string stored in the secure storage property of the plug-in step.
To deserialize these settings we use JsonConvert with a nested PluginSettings class.

PluginSettings pluginSettings =
                    JsonConvert.DeserializeObject<PluginSettings>(this.secureString);
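The nested PluginSettings class itself isn’t shown above; a minimal sketch could look like this, assuming the property names match the JSON keys in the secure configuration:

private class PluginSettings
{
    // Matches "AccountName" in the secure configuration JSON.
    public string AccountName { get; set; }

    // Matches "Key" in the secure configuration JSON.
    public string Key { get; set; }
}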

Initializing the CloudBlobClient

The CloudBlobClient class offers an easy way to manage and use all Azure blob storage related resources.
To initialize this class, we need to provide the URL and the StorageCredentials.

StorageCredentials storageCredentials =
              new StorageCredentials(pluginSettings.AccountName, pluginSettings.Key);
Uri baseUri = new Uri($"https://{pluginSettings.AccountName}.blob.core.windows.net");
CloudBlobClient blobClient = new CloudBlobClient(baseUri, storageCredentials);

Creating a root container

First we’ll make sure there is a container to store all the contents generated by this plug-in.

With the blob client, we can create a reference to the CloudBlobContainer with the name that is stored in the constant named FolderName.
To make sure the container exists, we call the CreateIfNotExists method, which creates the container for us if it isn’t present yet.

CloudBlobContainer container = blobClient.GetContainerReference(FolderName);
container.CreateIfNotExists();

Creating an entity directory

Now we want to create a directory inside the container to store entity specific records.

CloudBlobDirectory entityDirectory = 
                            container.GetDirectoryReference(entity.LogicalName);

Directories differ from containers because they don’t exist on disk; that’s why we don’t need to check whether the directory already exists. If you’re interested in more details, I recommend the article on Azure blob storage by John Atten.

Adding a blob to the directory

Just like the container and the directory before, we also need to create a reference for the blob.
On the blob directory, we ask for a reference to the CloudBlockBlob using the GetBlockBlobReference method.

string fileName = entity.Id.ToString("N") + ".json";
CloudBlockBlob blob = entityDirectory.GetBlockBlobReference(fileName);
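Putting the pieces together, the resulting blob URL for the sample deletion shown later in this post would look like this (account name taken from the configuration sample, container name from the screenshot, and the record id formatted with "N", i.e. without dashes):

https://loremipsum.blob.core.windows.net/samplecrmfolder/contact/6e843a3491b1e61180e400155d0a0b40.json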

We won’t create the blob immediately because we want to add some details about the content we’re about to store. This can be done by setting the Properties and Metadata properties of the blob.
One of the properties we want to set is the ContentType. This will allow other systems to recognize the file correctly as a JSON file.

blob.Properties.ContentType = "application/json";

However, the BlobProperties class only contains a fixed set of properties.
The Metadata property on the blob allows us to store custom metadata with the blob.

blob.Metadata["userid"] = context.UserId.ToString("B").ToLowerInvariant();
blob.Metadata["userfullname"] = fullName;
blob.Metadata["deletiondate"] = context.OperationCreatedOn.ToString("O");

Now it’s time to write some file contents to the blob. We create the blob data using the context of the current plug-in execution. For demonstration purposes, we serialize the JSON with the Formatting option set to Indented.
Because the blob content is plain text, we can use the UploadText method.

var blobData = new
  {
    context.UserId,
    FullName = fullName,
    context.MessageName,
    entity.LogicalName,
    entity.Id,
    entity.Attributes
  };
blob.UploadText(JsonConvert.SerializeObject(blobData, Formatting.Indented));

Now build the project so we can proceed.

Register the plug-in assembly

Now the freshly baked assembly needs to be registered on the server.
The steps to do this are outside the scope of this post, but more information can be found in the walkthrough: Register a plug-in using the plug-in registration tool.

Register the plug-in step for an event

To test the plug-in, we’ll register it asynchronously on the deletion event of every entity.

Dependencies on external resources should never be part of a synchronous pipeline.

In the Secure Configuration property, we set the value with the JSON object containing the connection information:

{
  "AccountName":"loremipsum",
  "Key":"DDWLOREM...IPSUMr0A=="
}

(obviously, these values do not represent real data)

Register the plug-in step for the deletion event of any entity

Because we are working with the deletion event, we need to register a Pre-Image to capture the values of all attributes before the actual deletion takes place.
We set the value of both the Name and the Entity Alias properties to Target.

Register a new image for the plug-in step for the deletion event of any entity

Testing the plug-in

We delete a contact in CRM. In my case the contact is called Sample User.

After a couple of seconds, we see the following container and directories appear on the storage account:
The "samplecrmfolder" container with multiple directories for different entities

Opening the file in the contact directory, shows us some familiar content:

{
  "UserId": "d617a1a0-359a-e411-9407-00155d0ae259",
  "FullName": "Lorem Ipsum",
  "MessageName": "Delete",
  "LogicalName": "contact",
  "Id": "6e843a34-91b1-e611-80e4-00155d0a0b40",
  "Attributes": [
    {
      "Key": "firstname",
      "Value": "Sample"
    },
    {
      "Key": "lastname",
      "Value": "User"
    },
    {
      "Key": "fullname",
      "Value": "Sample User"
    },
    ...
  ]
}

If we look at the HTTP response header when retrieving the file, we see that the content type and metadata properties are present:

HTTP/1.1 200 OK
Content-Type: application/json
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-version: 2015-12-11
x-ms-meta-userid: {d617a1a0-359a-e411-9407-00155d0ae259}
x-ms-meta-userfullname: Lorem Ipsum
x-ms-meta-deletiondate: 2016-12-08T14:23:25.5635361Z
x-ms-blob-type: BlockBlob

Concluding

By utilizing the Sandboxable Azure SDK, we only needed a few lines of code to store deleted CRM records in Azure blob storage, making a remote archive a piece of cake.

When using blob storage for archiving, you might want to take a look at cool blob storage, which might save you some money.

Sample code

The complete source code is available as a sample project.
Expect more samples in the Sandboxable-Samples repository on GitHub in the future.


A while back I’ve introduced Sandboxable. It’s a means to use NuGet packages that normally are not available for code that runs with Partial Trust.

In this post, we will walk through the steps to create a Microsoft Dynamics CRM plug-in that will add a message to an Azure queue.

At the end of the post you will find the links to the complete source code for you to use.

Setting up the project

  1. Create a new Class Library project in Visual Studio
  2. Add the following NuGet packages with their dependencies:
    • Microsoft.CrmSdk.CoreAssemblies
      This package adds the base assemblies needed to create a plug-in for CRM
    • MSBuild.ILMerge.Task
      This package makes sure that the generated assembly will also contain all dependencies.
      More information about this package can be found on the ILMerge MSBuild task NuGet package CodePlex page
    • Sandboxable.Microsoft.WindowsAzure.Storage
      This package provides the Azure storage SDK, modified to run in the sandbox.
  3. Change the Copy Local property for the CRM references to false. These assemblies are already present in the runtime hosting the sandbox, so they can be kept outside our assembly
  4. Enable strong name key signing on your project

Now you can do a test build of the project to check if everything works correctly.

Writing the plug-in

I’ve based the plug-in code on the MSDN article Write a plug-in.

Getting the connection details

To connect to an Azure queue, you need three details:

  1. The storage account name
  2. One of the storage account access keys
  3. The name of the queue

There are several ways to get these details at runtime. To name a few: hard-coded, stored as data in an entity, stored in a web resource as an XML file, or in the plug-in step configuration.
For this sample we’ll use a JSON string stored in the secure storage property of the plug-in step.
To deserialize these settings we use JsonConvert with a nested PluginSettings class.

PluginSettings pluginSettings =
	JsonConvert.DeserializeObject<PluginSettings>(this.secureString);
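For reference, this.secureString holds the Secure Configuration of the plug-in step; CRM passes it in through a constructor that takes the unsecure and the secure configuration string. A minimal sketch (the class name is made up for this illustration):

public class SampleQueuePlugin : IPlugin
{
    private readonly string secureString;

    // CRM instantiates the plug-in through this two-argument constructor,
    // passing in the step's unsecure and secure configuration values.
    public SampleQueuePlugin(string unsecureString, string secureString)
    {
        this.secureString = secureString;
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        // ...
    }
}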

Initializing the CloudQueueClient

The CloudQueueClient class offers an easy way to manage and use Azure queues.
To initialize this class, we need to provide the URL and the StorageCredentials.

StorageCredentials storageCredentials = 
	new StorageCredentials(pluginSettings.AccountName, pluginSettings.Key);

Uri baseUri = new Uri($"https://{pluginSettings.AccountName}.queue.core.windows.net");

CloudQueueClient queueClient = new CloudQueueClient(baseUri, storageCredentials);

Creating a reference to the queue

With the queue client, we can create a reference to the CloudQueue with the name that is stored in the constant named QueueName.
To make sure the queue exists, we call the CreateIfNotExists method, which creates the queue for us if it isn’t present yet.

CloudQueue queue = queueClient.GetQueueReference(QueueName);

queue.CreateIfNotExists();

Adding the message to the queue

We create some message data, using the context of the current plug-in execution. This data is wrapped in a CloudQueueMessage.
We add the message to the queue, using the AddMessage method and we’re done!

var messageData = new
{
	context.UserId,
	context.MessageName,
	entity.LogicalName,
	entity.Id,
	entity.Attributes
};

CloudQueueMessage queueMessage =
	new CloudQueueMessage(JsonConvert.SerializeObject(messageData));

queue.AddMessage(queueMessage);

We must build our project again so we can proceed.

Register the plug-in assembly

Now the freshly baked assembly needs to be registered on the server.
The steps to do this are outside the scope of this post, but more information can be found in the walkthrough: Register a plug-in using the plug-in registration tool.

Register the plug-in step for an event

To test the plug-in, we’ll register it on the creation event of the contact entity.
For performance optimization we’ll choose the asynchronous execution method. External resources should never be part of your synchronous pipeline.

In the Secure Configuration property, we set the value with the JSON object containing the connection information:

{
	"AccountName":"loremipsum",
	"Key":"DDWLOREM...IPSUMr0A=="
}

(obviously, these values do not represent real data)

Register the plug-in step for the creation event of contact entities

Testing the plug-in

We create a new contact in CRM called Sample User.
After a couple of seconds we see the following message appear on the queue:

{
	"UserId":"d617a1a0-359a-e411-9407-00155d0ae259",
	"MessageName":"Create",
	"LogicalName":"contact",
	"Id":"6e843a34-91b1-e611-80e4-00155d0a0b40",
	"Attributes":[
		{
			"Key":"firstname",
			"Value":"Sample"
		},
		{
			"Key":"lastname",
			"Value":"User"
		},
		{
			"Key":"fullname",
			"Value":"Sample User"
		},
		...
	]
}

(formatted for readability)

Concluding

By utilizing the Azure SDK, we only needed a few lines of code to send messages to an Azure queue, making all sorts of integration with other systems possible.
By using the Sandboxable project we’re no longer limited by the sandbox.
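To sketch the receiving side: a consumer, for example a worker or console application using the regular (non-sandboxed) Azure storage SDK, could read the message like this (the queue name is an assumption; the sample keeps it in the QueueName constant):

CloudQueue queue = queueClient.GetQueueReference("samplecrmqueue"); // assumed name
CloudQueueMessage message = queue.GetMessage();

if (message != null)
{
    // Process the JSON payload created by the plug-in.
    Console.WriteLine(message.AsString);

    // Remove the message from the queue once it has been handled.
    queue.DeleteMessage(message);
}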

Sample code

The complete source code is available as sample project.
Expect more samples in the Sandboxable-Samples repository on GitHub in the future.


I would like to introduce to you Winvision’s first open source project: Sandboxable.

Sandboxable enables your project to utilize functionality provided by other (Microsoft) libraries that normally are not available in a Partial Trust environment like the Microsoft Dynamics CRM sandbox process.
The project offers modified NuGet packages that will run with Partial Trust.

Sandboxing

Sandboxing is the practice of running code in a restricted security environment, which limits the access permissions granted to the code. For example, if you have a managed library from a source you do not completely trust, you should not run it as fully trusted. Instead, you should place the code in a sandbox that limits its permissions to those that you expect it to need.

You can read more on this in the MSDN article How to: Run Partially Trusted Code in a Sandbox.
If you encounter a .NET sandbox today, chances are it’s running with Level 2 Security Transparency.

A prime example of software running in a sandbox is Microsoft Dynamics CRM (Online) plug-ins and custom workflow activities.

The problem

As developers we use a lot of library code like NuGet packages as we’re not trying to reinvent the wheel. The downside is that most of these libraries are not written with a Partial Trust environment in mind.
When we embed these libraries into our code in the sandbox, we encounter two common issues:

  1. The code contains security-critical code and will fail to load with a TypeLoadException or will throw a SecurityException at runtime
  2. The package references another package that contains security-critical code; even though that code might never be used, it will still trigger one of the exceptions mentioned above

Problematic constructs

  • Calling native code

    [DllImport("advapi32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    internal static extern bool CryptDestroyHash(IntPtr hashHandle);
  • Override SecurityCritical members of an object like Exception

    public override void GetObjectData(SerializationInfo info, StreamingContext context) {
        ...
    }

    Where Exception has the following attribute on this method

    [System.Security.SecurityCritical]
    public virtual void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        ...
    }
  • Serialize non-public classes, fields or properties

    [JsonProperty(DefaultValueHandling = DefaultValueHandling.Ignore, NullValueHandling = NullValueHandling.Ignore, PropertyName = PropertyNotBefore, Required = Required.Default)]
    private long? _notBeforeUnixTime { get; set; }

The solution

When we encounter a NuGet package that fails to load or execute in the sandbox and its source is available, we make a Sandboxable copy of it.
This is done by eliminating the offending code in the least obtrusive way and publishing this version to NuGet.

The base rules are:

  • Keep the code changes as small as possible
  • Prefix all namespaces with Sandboxable
  • Eliminate offending NuGet dependencies
  • If a new dependency is needed, it will be on a sandbox friendly NuGet package
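In practice this means that adopting a Sandboxable package is mostly a matter of swapping namespaces. A small illustration with the storage package used in the samples above:

// Regular Azure storage SDK; fails to load inside the CRM sandbox:
// using Microsoft.WindowsAzure.Storage.Auth;

// Sandboxable drop-in replacement, namespace prefixed per the rules above:
using Sandboxable.Microsoft.WindowsAzure.Storage.Auth;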

Source and contribution

The source is published in the Sandboxable project on GitHub.

Included in the solution is also a stand-alone project to test if code will break inside a sandbox. This makes testing libraries easier without the need to deploy it to a (remote) environment.
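For reference, and not necessarily how the included test project does it, a minimal sketch of hosting code in a partial-trust AppDomain in the .NET Framework looks like this (types from System.Security and System.Security.Policy):

// Grant only the permissions of the Internet zone, a common
// approximation of a Partial Trust environment.
Evidence evidence = new Evidence();
evidence.AddHostEvidence(new Zone(SecurityZone.Internet));
PermissionSet permissions = SecurityManager.GetStandardSandbox(evidence);

AppDomainSetup setup = new AppDomainSetup
{
    ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
};

// Code loaded into this domain runs with the restricted permission set.
AppDomain sandbox = AppDomain.CreateDomain("Sandbox", evidence, setup, permissions);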

I’d like to invite everybody to use the Sandboxable NuGet packages and to contribute to the project.


This week I’ll be attending the Microsoft Build 2016 conference in San Francisco.
Lots of news to be expected for developers covering the many technologies Microsoft is putting on the market.

Keynotes

Traditionally there are two keynotes at the Build conference. The first one focuses on Microsoft Windows; the second one focuses more on Microsoft Azure. I expect the same pattern this year.
Based on the news and rumors of the last couple of weeks, combined with the scheduled sessions, I expect the following topics to be covered during the keynotes.

  • Windows 10
    Obviously. Redstone is coming and Edge is getting add-ins, so those are two obvious topics. I expect an overview of all the new things that are already part of the current fast ring previews.
  • Xamarin
    Microsoft recently bought the company, a move I already expected two years ago. Looking around San Francisco, they’re running quite the marketing campaign, so naturally this needs to be in the keynote. I hope they’ll answer the question of how the licensing will be affected.
  • Xbox
    We have been expecting universal apps on the Xbox for several years, but this year should be it. A tweet by Scott Hanselman indicates that Phil Spencer will be part of this year’s conference.
  • Surface Hub
    I don’t expect a new device, only some demos. But as the devices are now finally available for purchase, they probably want to do some marketing around these costly beasts. A couple of sessions in the program also mention developing apps for this device.
  • HoloLens
    The first wave of devices ships at the same time as the conference; that can’t be accidental. Lots of sessions cover different aspects of the HoloLens, so a new demo during the keynote can’t be far away. Hopefully the device can now measure the distance between the eyes automatically; last year this was a manual task.
  • Visual Studio vNext
    A bit of news got around about the next version of Visual Studio and the improved installer experience. This would be the time to share this news officially.
  • .NET Core
    It’s about time .NET Core is officially released, as it’s been in preview for a long time. So I expect the 1.0 RTM version to be pushed to the world today.
  • Azure
    Azure is a big platform, and I expect a couple of new features will be released to the public. Maybe even some Azure Stack integration will be demoed.
  • Office Graph
    Already announced in preview last year. I expect a full release this year, maybe with new features added to the API in preview.

My journey through the week

On my Twitter feed I’ll be posting all the sessions I’m attending. My focus will probably be on UWP and Azure related sessions.

Want to meet me or do you have a question about the sessions I’ve attended? Just send me a message.

Watching sessions

Not in San Francisco and still want to be part of the action?
Channel9 will be covering a lot of content live, including the keynotes and interviews. All the sessions will also be available on demand later.

Go to the Channel9 Build 2016 website.


MCSD Universal Windows Platform

Recently I got certified by Microsoft as Solutions Developer for the Windows Universal Platform by taking two exams that are currently in beta. Because the exams are in beta there is not much guidance to be found online. I noticed during the exams I was being tested on skills not mentioned on the Microsoft Learning web site.
In this post I’ll cover these differences and how I prepared for the exams so it’ll be easier for you to get certified.

Disclaimer

Microsoft is constantly changing the exams, so my experience can differ from yours. As both UWP exams were in beta, the exams I took might not represent the exams in the future.
Also, I won’t go into detail about the actual questions in the exam. This is prohibited by the NDA we all sign at the start of an exam.

MCSD: Universal Windows Platform

According to Microsoft Learning, with this certification you:

Demonstrate your expertise at planning, designing, and implementing Universal Windows Platform apps that offer a compelling user experience, leverage other services and devices, and use best coding practices to enhance maintainability.

The certification covers a total of three exams: one exam that has been around for a couple of years, and the two beta exams I mentioned earlier.

Exam 70-483: Programming in C#

Microsoft Specialist Programming in C-Sharp

Passing this exam will give you the Microsoft Specialist certification.

This certification will count towards these other MCSA and MCSD certifications:

  • Microsoft Certified Solutions Associate: SQL Server 2012
  • Microsoft Certified Solutions Developer: SharePoint Applications
  • Microsoft Certified Solutions Developer: Web Applications
  • Microsoft Certified Solutions Developer: Windows Store Apps Using C#

As I passed this exam back in December 2013, I can’t offer you any actual insights into additional measured skills, so I’ll only give you the link to the skills measured for exam 70-483.

Exam 70-354: Universal Windows Platform – App Architecture and UX/UI

This exam validates a candidate’s knowledge and skills for planning the development of Universal Windows Platform apps and designing and implementing a compelling user experience.

This exam is quite broad, as it covers everything from designing the app to the application lifecycle management of your app.
The skills measured for exam 70-354 are listed on the website.

Additional skills that can be tested:

  • Choose between version control systems, for example Team Foundation Server, Visual Studio Team Services, and GitHub
  • Implement optimistic concurrency in your data layer
  • Enable beta testing of your app
  • Publish the app to the store

Exam 70-355: Universal Windows Platform – App Data, Services, and Coding Patterns

This exam validates a candidate’s knowledge and skills for implementing apps that leverage other services and devices and that use best coding practices to enhance maintainability.

This exam is more limited to the developer role. It covers everything related to developing code, though not limited to application development alone.
The skills measured for exam 70-355 are listed on the website.

Additional skills that can be tested:

  • Execute code reviews

Preparation

To prepare for the exams, I followed several of the free online training courses provided at the Microsoft Virtual Academy.

Conclusion

What I found surprising was that a lot of questions were not about UWP app development itself, but focused on the surrounding challenges and technologies, like:

  • Working in a team
  • Sharing and reviewing code
  • Using back-end services like Azure
  • Using and connecting with technologies not owned by Microsoft like GitHub, SQLite and MongoDB

So passing the exams will show that you are not only able to write an app, but also that you can do so within a team, following an appropriate lifecycle, and utilizing external data sources.

Beta bonus: Charter member

If you take the exams while they are in beta, you will find a bonus notation in the certification title on your transcript:
MCSD Charter notation
This is explained at the end of the transcript.

*Charter- Certification that was achieved within six months following the retail release date of the certification. Charter Members are recognized by being given the Charter version of the certificate acknowledging their early adoption of the technology solution.

Filed under C#, Windows