Recently I was challenged with the task of setting the layout and content of a wiki page when a new page is added to a team site. As I'm used to working with SharePoint publishing, the task sounded easy, but I was wrong.

Text Layout

[Image: the Text Layout option in the SharePoint ribbon]

My first step was to figure out where SharePoint stores the wiki "text layouts", but I discovered this isn't how it works: the layouts available for wikis are not configurable anywhere.

Using some PowerShell, however, it was easy to get and set the layout, as it's simply the HTML content of the "Wiki Content" column (internal name "WikiField") of the list item.

$web = Get-SPWeb http://server/teamsite
$list = $web.Lists["Site Pages"]
$listItem = $list.Items[2] # My test page
$listItem["Wiki Content"] # Returns HTML

The HTML content consists of 2 parts. The layout table and the layout data.

<table id="layoutsTable" style="width: 100%">
    <tbody>
        <tr style="vertical-align: top">
            <td style="width: 100%">
                <div class="ms-rte-layoutszone-outer" style="width: 100%">
                    <div class="ms-rte-layoutszone-inner">
                    </div>
                </div>
            </td>
        </tr>
    </tbody>
</table>
<span id="layoutsData" style="display: none">false,false,1</span>

The layout data describes visibility of the header and footer and the number of columns.
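
Setting a layout is then just a matter of writing that markup back to the field. A minimal sketch, reusing $web, $list and $listItem from the snippet above; the $html value here is a placeholder for the layout table and layoutsData span shown earlier:

$html = '<table id="layoutsTable" ...>...</table><span id="layoutsData" style="display: none">false,false,1</span>'
$listItem["Wiki Content"] = $html
$listItem.UpdateOverwriteVersion() # overwrite the current version instead of creating a new one
$web.Dispose()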

Event receiver

To set the content, the first thing that came to mind was to add an ItemAdding event receiver, associated with ListTemplateId 119 (WebPageLibrary).

I deployed the solution, added a page and, ta-dah: no content!

After using the debugger to verify my event receiver was triggered, I went to the next option: adding an ItemAdded event receiver. This time I got an exception that the page was already modified by another user. Refreshing the page gave me the default content. So this told me two things:

  1. It’s possible to set default content
  2. I forgot to set the Synchronization property

So I fixed the second issue, deployed once again and got: no content!

As my receiver called Update(), I got a version history which showed the content was set, but the final version still ended up empty.

Working with SharePoint has taught me that when faced with utter desperation you always have an escape: launch Reflector.

There I found this gem of code in the SubmitBtn_Click method of the CreateWebPage class:

SPFile file = SPUtility.CreateNewWikiPage(wikiList, serverRelativeUrl);
SPListItem item = file.Item;
item["WikiField"] = "";
item.UpdateOverwriteVersion();

So no matter what I do in ItemAdding or ItemAdded, the content always ends up empty!

After this discovery, the fix was to remove the code from the ItemAdding and ItemAdded events, move it to the ItemUpdated event (synchronous) and add a check whether the WikiField content is an empty string.

public override void ItemUpdated(SPItemEventProperties properties)
{
    base.ItemUpdated(properties);

    var listItem = properties.ListItem;

    if (!string.IsNullOrEmpty(listItem["WikiField"] as string))
    {
        return;
    }

    // Prevent this receiver from firing on our own update
    this.EventFiringEnabled = false;

    // "html" holds the default layout markup (the layout table and layoutsData span)
    listItem["WikiField"] = html;
    listItem.UpdateOverwriteVersion();

    this.EventFiringEnabled = true;
}
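
For reference, the receiver registration in the feature's elements manifest then looks something like this. A sketch: the receiver name, class and assembly token are placeholders; the essential parts are ListTemplateId 119 and the Synchronization element.

<Receivers ListTemplateId="119">
  <Receiver>
    <Name>WikiPageContentReceiver</Name>
    <Type>ItemUpdated</Type>
    <SequenceNumber>10000</SequenceNumber>
    <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
    <Class>MyCompany.WikiPages.WikiPageContentReceiver</Class>
    <Synchronization>Synchronous</Synchronization>
  </Receiver>
</Receivers>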

Now every wiki page added to the team site gets the correct text layout and the default HTML content.

I have been running my own mail server at home for years, using Postfix, Dovecot, amavisd-new, ClamAV and SpamAssassin. But it requires a reliable connection and some maintenance once in a while. And of course, it always breaks when I am on the other side of the world.

To free myself of that burden, I decided to make the move to Office 365. I got myself a P1 subscription and started to browse through the configuration screens. The migration of an account from IMAP to Exchange Online was very fast and easy.

Happy with how everything looked, felt and connected, I was ready to make the switch.

Just before changing the MX record to point to Office 365, I double-checked the configuration of my account. I couldn't find any way to set my account as a catch-all account, and after some research I found out this is not possible at all!

Catch-all Mailbox

A catch-all mailbox receives messages sent to email addresses in a domain that do not exist. Exchange Online anti-spam filters use recipient filtering to reject messages sent to mailboxes that don’t exist, so catch-all mailboxes are not supported.

That left me two options:

  1. Stop the migration to Office 365, and leave things as they were.
  2. Make every email address I used in the past an alias.

I started searching whether anyone had done this before. It appears nobody has, so, seeing this as a challenge, I started working on my own solution.
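
The core of that solution is adding each old address as a proxy address (alias) on the mailbox. A minimal sketch using the Exchange Online cmdlets; the mailbox identity and address are made up:

# Assumes a remote PowerShell session to Exchange Online is already set up
Set-Mailbox -Identity "jeroen" -EmailAddresses @{Add = "smtp:old-address@mydomain.example"}

The lowercase smtp: prefix marks the address as a secondary (alias) address, while the primary address keeps the uppercase SMTP: prefix.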

read more...

Recently I worked on an HttpHandler implementation that serves images from a backend system. Although everything seemed to work as expected, it was discovered that images were requested by the browser on every page refresh instead of the browser caching them locally.
Together with my colleague Bert-Jan I investigated and solved the problem, which is explained in this post.

The problem

Let's start with the original (simplified) code. This code gets the image from the backend system (in this case Content Management Server 2002) and serves it to the browser, or, in case the resource is not available, returns a 404 status code.

public class ResourceHandler : IHttpHandler
{
  public void ProcessRequest(HttpContext context)
  {
    context.Response.Cache.SetCacheability(HttpCacheability.Public);
    context.Response.Cache.SetMaxAge(new TimeSpan(1, 0, 0));

    string imagePath = "some path";

    Resource resource = CmsHttpContext.Current.RootResourceGallery
                                    .GetByRelativePath(imagePath) as Resource;

    if (resource == null)
    {
      // Resource not found
      context.Response.StatusCode = 404;
      return;
    }

    using (Stream stream = resource.OpenReadStream())
    {
      byte[] buffer = new byte[32];
      int bytesRead;

      // Only write the number of bytes actually read, so the last chunk isn't padded
      while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
      {
        context.Response.OutputStream.Write(buffer, 0, bytesRead);
      }
    }
  }

  public bool IsReusable
  {
    get { return true; }
  }
}

Every time the browser requested a resource, the server responded with the following headers and the full image.

HTTP/1.1 200 OK
Cache-Control: public
Content-Length: 3488
Content-Type: image/gif
Server: Microsoft-IIS/6.0
X-AspNet-Version: 2.0.50727
COMMERCE-SERVER-SOFTWARE: Microsoft Commerce Server, Enterprise Edition
X-Powered-By: ASP.NET
Date: Fri, 11 Mar 2011 10:51:08 GMT

GIF89a... (the raw image)

The cause

For some reason the local browser cache was not being used. We fired up Fiddler and started comparing the headers with another source where an image was getting cached locally.

On the first request we discovered an additional Last-Modified header:

Last-Modified: Tue, 05 Jun 2007 15:19:48 GMT

The second response for the same image was not a "200 OK" but a "304 Not Modified" message.

HTTP/1.1 304 Not Modified
Connection: close
Date: Fri, 11 Mar 2011 12:21:55 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Cache-Control: public
Last-Modified: Tue, 05 Jun 2007 15:19:48 GMT

The solution

So the first thing missing was the Last-Modified header in our first response. We added code to include it.

context.Response.Cache.SetLastModified(resource.LastModifiedDate);

By adding the Last-Modified date, the browser added a new header to its second request for the image:

If-Modified-Since: Tue, 05 Jun 2007 15:19:48 GMT

But the response was still the same 200 OK with the complete image. As it turned out, you need to handle the If-Modified-Since header yourself. We added the following code to handle this.

string rawIfModifiedSince = context.Request.Headers.Get("If-Modified-Since");
if (string.IsNullOrEmpty(rawIfModifiedSince))
{
  // Set Last Modified time
  context.Response.Cache.SetLastModified(resource.LastModifiedDate);
}
else
{
  DateTime ifModifiedSince = DateTime.Parse(rawIfModifiedSince);

  if (resource.LastModifiedDate == ifModifiedSince)
  {
    // The requested file has not changed
    context.Response.StatusCode = 304;
    return;
  }
}

After testing this again, the image was still transmitted every time it was requested. A quick debug of the date comparison revealed that the HTTP date does not contain milliseconds, while the resource's last-modified date does, so an exact comparison never matched. The following fix was applied:

if (resource.LastModifiedDate.AddMilliseconds(
                          -resource.LastModifiedDate.Millisecond) == ifModifiedSince)

Now every following request returns a 304 Not Modified, which saves us a lot of traffic and loading time!

Summary

To conclude this post I give you the complete code:

using System;
using System.IO;
using System.Web;
using Microsoft.ContentManagement.Publishing;

public class ResourceHandler : IHttpHandler
{
  public void ProcessRequest(HttpContext context)
  {
    context.Response.Cache.SetCacheability(HttpCacheability.Public);
    context.Response.Cache.SetMaxAge(new TimeSpan(1, 0, 0));

    string imageName = "some path";

    Resource resource = CmsHttpContext.Current.RootResourceGallery
                                          .GetByRelativePath(imageName) as Resource;

    if (resource == null)
    {
      // Resource not found
      context.Response.StatusCode = 404;
      return;
    }

    string rawIfModifiedSince = context.Request.Headers.Get("If-Modified-Since");
    if (string.IsNullOrEmpty(rawIfModifiedSince))
    {
      // Set Last Modified time
      context.Response.Cache.SetLastModified(resource.LastModifiedDate);
    }
    else
    {
      DateTime ifModifiedSince = DateTime.Parse(rawIfModifiedSince);

      // HTTP does not provide milliseconds, so remove it from the comparison
      if (resource.LastModifiedDate.AddMilliseconds(
                         -resource.LastModifiedDate.Millisecond) == ifModifiedSince)
      {
          // The requested file has not changed
          context.Response.StatusCode = 304;
          return;
      }
    }

    using (Stream stream = resource.OpenReadStream())
    {
      byte[] buffer = new byte[32];
      int bytesRead;

      // Only write the number of bytes actually read, so the last chunk isn't padded
      while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
      {
        context.Response.OutputStream.Write(buffer, 0, bytesRead);
      }
    }
  }

  public bool IsReusable
  {
    get { return true; }
  }
}
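
One thing not shown above is how the handler gets wired up. Under ASP.NET 2.0 / IIS 6 that is an httpHandlers registration in web.config along these lines (a sketch; the path and assembly name are assumptions):

<system.web>
  <httpHandlers>
    <add verb="GET" path="images/*.gif" type="ResourceHandler, MyAssembly" />
  </httpHandlers>
</system.web>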

With SharePoint it’s easy to configure multiple zones for your SharePoint Web Application. For example you have a Publishing Web Site with two zones.

  1. The authenticated CMS where editors can manage content: https://cms.int
  2. The anonymous website where everybody can view the content: http://www.ext

When the editors link to sites, pages, documents and images, the URLs will start with https://cms.int. After the content is published, it'll also be available on the anonymous site, where most of the URLs will be automatically translated to the corresponding zone URL and start with http://www.ext.

There are, however, some places where this is not the case. You could try to use relative URLs, but even that won't fix every scenario.

Translate the URL using code

Facing this issue, I had to translate the URLs myself. But I wanted to write minimal code. Luckily, Microsoft has done most of the work for me.

On the SPFarm object you will find the AlternateUrlCollections property. This "collection" is actually an instance of the SPAlternateUrlCollectionManager class and provides the RebaseUriWithAlternateUri method.
And this is where the magic happens.

This method has an overload where you supply a Uri and an SPUrlZone. You can provide one of the values of the SPUrlZone enumeration, or you can provide the current zone.

To get your current zone you can use the static Lookup method of the SPAlternateUrl class. This method requires a Uri, so we provide the current one using the ContextUri property from the same class.

To wrap it all up I give you the code:

var originalUri = new Uri("https://cms.int/pages/default.aspx");

var zone = SPAlternateUrl.Lookup(SPAlternateUrl.ContextUri).UrlZone;

var translatedUri = SPFarm.Local.AlternateUrlCollections
                                    .RebaseUriWithAlternateUri(originalUri, zone);

// When accessing from the authenticated zone:
// translatedUri == "https://cms.int/pages/default.aspx"

// When accessing from the anonymous zone:
// translatedUri == "http://www.ext/pages/default.aspx"

“Other” URLs

If you pass a URL which is not listed as an Alternate Access Mapping the method will return the original URL.

In a previous post I have written about using the people picker over a one-way trust. In that post I used STSADM commands, as there are no other ways to configure this. A downside of the STSADM command is your domain password being visible on the command prompt in plain text for everybody to read, or to retrieve from the command-line history.

With SharePoint 2010, Microsoft introduced several cmdlets to replace the "old" STSADM commands. But looking at the STSADM to Windows PowerShell mapping Microsoft has published, you will see the commands for configuring the people picker are not present.

Creating my own script

PowerShell contains the Get-Credential cmdlet, which uses a dialog to request credentials from the user and stores the password in a SecureString. This triggered me to write a PowerShell script which works the same as STSADM -o setproperty -pn peoplepicker-searchadforests, but instead of typing the credentials on the command line it uses the credential dialog for every trusted domain.
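
For example:

$credential = Get-Credential "DOMAIN\account" # shows the credential dialog
$credential.Password                          # a System.Security.SecureString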

As written in my previous post the configuration is done in two steps.

SetAppPassword

First you need to create a secure store for the credentials. This is done by executing the SetAppPassword command on every server in your SharePoint Farm with the same password.

STSADM

stsadm -o setapppassword -password <password>

PowerShell

Set-AppPassword "<password>"

function Set-AppPassword([String]$password) {
  # SPSecureString is an internal type, so it has to be created through reflection
  $type = [Microsoft.SharePoint.Utilities.SPPropertyBag].Assembly.GetType("Microsoft.SharePoint.Utilities.SPSecureString")
  $method = $type.GetMethod("FromString", "Static, NonPublic", $null, @([String]), $null)
  $secureString = $method.Invoke($null, @($password))
  [Microsoft.SharePoint.SPSecurity]::SetApplicationCredentialKey($secureString)
}

PeoplePickerSearchADForests

The second step is to register the (trusted) domains to be visible in the people picker. Remember this setting is per web application and zone.

STSADM

stsadm -o setproperty -url <url> -pn "peoplepicker-searchadforests" -pv
  "forest:<source forest>;domain:<trusted domain>,<trusted domain>\<account>,<password>"

PowerShell

Set-PeoplePickerSearchADForests "<url>" "forest:<source forest>;domain:<trusted domain>"
function Set-PeoplePickerSearchADForests([String]$webApplicationUrl, [String]$value) {
  $webApplication = Get-SPWebApplication $webApplicationUrl

  $searchActiveDirectoryDomains = $webApplication.PeoplePickerSettings.SearchActiveDirectoryDomains
  $searchActiveDirectoryDomains.Clear()

  $currentDomain = (Get-WmiObject -Class Win32_ComputerSystem).Domain

  if (![String]::IsNullOrEmpty($value)) {
    $value.Split(@(';'), "RemoveEmptyEntries") | ForEach {
        # Each entry looks like "forest:<name>" or "domain:<name>"; any trailing
        # ",<account>,<password>" part (STSADM style) is split off and ignored,
        # because the credentials are prompted for below
        $strArray = $_.Split(@(','))

        $item = New-Object Microsoft.SharePoint.Administration.SPPeoplePickerSearchActiveDirectoryDomain

        [String]$value = $strArray[0]

        $index = $value.IndexOf(':');
        if ($index -ge 0) {
            $item.DomainName = $value.Substring($index + 1);
        } else {
            $item.DomainName = $value;
        }

        # Entries prefixed with "domain:" are domains, everything else is a forest
        if ([System.Globalization.CultureInfo]::InvariantCulture.CompareInfo.IsPrefix($value, "domain:", "IgnoreCase")) {
            $item.IsForest = $false;
        } else {
            $item.IsForest = $true;
        }

        # For every foreign domain, ask for the trust credentials using the dialog
        if ($item.DomainName -ne $currentDomain) {
            $credentials = $host.ui.PromptForCredential("Foreign domain trust" +
              " credentials", "Please enter the trust credentials to connect to" +
              " the " + $item.DomainName + " domain", "", "")

            $item.LoginName = $credentials.UserName;
            $item.SetPassword($credentials.Password);
        }

        $searchActiveDirectoryDomains.Add($item);
    }

    $webApplication.Update()
  }
}

Using the script

I have attached the script so you can use it in any way you want. You can put the commands in your own .ps1 file, or load the script into your current session using the following syntax:

. .\<path to file>PeoplePickerSearchADForests.ps1

(yes, that’s a dot, then a space, then the path to the script)
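
Loaded that way, a complete configuration run could look like this (a sketch; the URL and domain names are made up):

. .\PeoplePickerSearchADForests.ps1
Set-AppPassword "<password>"
Set-PeoplePickerSearchADForests "http://intranet.example" "forest:corp.example;domain:partner.example"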

PeoplePickerSearchADForests.zip