Dynamically Assigning Tasks to SharePoint 2013 Users

If you’re using the web designer for a K2 for SharePoint 2013 workflow and you assign a task to a user selected from a people picker, it works pretty seamlessly. Try to do the same thing in K2 Studio, however, and it fails miserably. This is because the value stored for the selected task owner is the SharePoint ID, so if you assign a task to that user it will be assigned to something like K2:MyDomain\167

Luckily there’s a way past this, but it’s not obvious. Thanks to Igor from K2 for helping me find this…

Step 1: Create a new SmartObject.

There’s a VERY useful service object which K2 includes with the product, but by default there’s no SmartObject available over it. We’re going to fix that by creating a new SmartObject for it. Run the SmartObject Testing Tool (in the bin folder) and look for this Service Object:

[Screenshot: the Service Object in the SmartObject Testing Tool]

Once you’ve clicked “Create SmartObject” you can change the category (I’ll leave it as Default for now) and publish it:

[Screenshot: the SmartObject published under the Default category]

Step 2: Test the new SmartObject

Open the category you published the SmartObject to and execute it. You’ll see pages of new methods available to you. Look for the one called “Get K2 FQN for SharePoint users” and fill in the details you want. The Site URL will be your SharePoint site, the K2 Label will always be “K2”, and in the User ids field you can enter a semicolon-delimited list of SharePoint IDs. Run the method and AD users pop out the other end:

[Screenshot: executing the “Get K2 FQN for SharePoint users” method]
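You can also execute the same method from code via the SmartObject client API. The following is only a minimal sketch: the SmartObject, method and property system names are assumptions of mine, so check the actual names in the SmartObject Testing Tool before using it.

    // Sketch: resolve SharePoint user IDs to K2 FQNs from code.
    // The SmartObject/method/property system names below are assumptions.
    using System;
    using SourceCode.Hosting.Client.BaseAPI;
    using SourceCode.SmartObjects.Client;

    class FqnLookup
    {
        static void Main()
        {
            var cb = new SCConnectionStringBuilder
            {
                Host = "localhost",
                Port = 5555,
                Integrated = true,
                IsPrimaryLogin = true
            };

            var server = new SmartObjectClientServer();
            server.CreateConnection();
            server.Connection.Open(cb.ToString());

            SmartObject smo = server.GetSmartObject("SharePoint_Integration");
            smo.MethodToExecute = "Get_K2_FQN_for_SharePoint_users";
            smo.Properties["Site_URL"].Value = "https://sharepoint.mydomain.com/sites/demo";
            smo.Properties["K2_Label"].Value = "K2";
            smo.Properties["User_ids"].Value = "167;204"; // semicolon-delimited SharePoint IDs

            SmartObjectList results = server.ExecuteList(smo);
            foreach (SmartObject row in results.SmartObjectsList)
                Console.WriteLine(row.Properties["FQN"].Value);

            server.Connection.Close();
        }
    }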

That’s it. Hope that helps someone!


Completing tasks with Exchange

We recently came across an interesting issue where a user would complete an action via Exchange integration (replying to the email with “Approve”) and, even though the action worked, they would always get a second email saying:

“The K2 server could not find the worklist item 479_54. This item may have been actioned by another user.
The full error from the K2 server is ‘The worklist item 479_54 is no longer available or you do not have rights to open it.’.”

It took a while to get to the bottom of it, but when you realise what’s going on it’s actually simple to resolve.

What happens is that the K2 service account checks the Exchange account for emails, and when it gets one it parses it and actions the task. In our case the problem was that the UAT K2 server was using the same K2 service account as the production server, which means the Exchange account was being checked by the production K2 service as well as the UAT K2 service. The production server happily completed the task, but of course that task doesn’t exist in the UAT environment, so the second ‘fail’ email came from the UAT server, not the production server.

To resolve this issue make sure you aren’t using the same service account on more than one environment on the same domain. Of course it’s best practice to use different accounts anyway, but here’s one more reason why…


Posted on September 18, 2015 in K2 Workflows


Some users complete tasks, but it ain’t working…

Had a very interesting issue with a client’s environment recently. There was one user (let’s call her Jane) who couldn’t complete tasks. They would appear in her task list and she could action them without any errors, but nothing would happen and the task would sit there until someone else completed it.

It turns out the error was in the database, but it’s tricky to see the problem. I ran the following query:

SELECT *
FROM [K2].[Server].[Actioner]
WHERE ActionerName LIKE '%Jane%'

The results looked fine, but exporting the results to CSV and opening them in Notepad revealed the problem. I expected this:

    K2:MyDomain\Jane

But what I got was this:

    K2:MyDomain\Jane____________________________________
Notice the ‘_’s? That’s actually a massive amount of whitespace after the user’s name. That’s what caused the issue. I trimmed the username and suddenly the problem went away.
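If you want to check your own environment for this, something along these lines should surface it (a sketch, assuming ActionerName is an nvarchar column; LEN() ignores trailing spaces while DATALENGTH() counts every byte). Back up the database before fixing anything:

    -- Find actioners whose stored name has trailing whitespace.
    -- nvarchar stores 2 bytes per character, hence the * 2.
    SELECT ActionerName, LEN(ActionerName) AS Chars, DATALENGTH(ActionerName) AS Bytes
    FROM [K2].[Server].[Actioner]
    WHERE DATALENGTH(ActionerName) > LEN(ActionerName) * 2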

So what caused it? Here’s a response from K2 support (who were very helpful as always):

Hi Trent

Values are written to the Actioners table based on the Destination Users set in a WF. Every User/Group/Role or value ever used as a Destination will show up in this table. Effectively, any user that gets authenticated by the server…

The problem comes in at the point where Destinations are assigned. In cases where Destinations are assigned dynamically via DataFields whose values are derived from outside sources (drop-downs, user input and so on)… if a value was copy/pasted into an input field that will then be used as the Destination, then this value will end up in the Actioners table as is (UTF characters and all).

In most cases this is why these spaces are introduced… erroneous user input which eventually filters down to the DB level but is pretty much invisible from any UI elements due to SQL and most of the web interface’s auto-trimming actions.

In very rare cases I have found that the spaces come directly from AD, where the AD object was created using PowerShell and the inputs for the user details were copy/pasted from some sort of rich text document containing UTF characters… However, in your case the former appears more likely. Check where the Destination Users for your workflows are derived from and ensure that there are no occurrences where users can copy/paste values for Destinations… or if that is unavoidable, make sure to TRIM values before submitting to the K2 server.
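Following that advice, a defensive trim before any user-supplied value becomes a destination could look like this (a sketch; the helper name is mine):

    // Sketch: clean a user-supplied destination before it reaches K2.
    // The extra characters strip invisible UTF characters that survive
    // copy/paste (non-breaking space, zero-width space, byte order mark).
    private static string CleanDestination(string rawInput)
    {
        if (string.IsNullOrEmpty(rawInput))
            return rawInput;

        return rawInput.Trim(' ', '\t', '\r', '\n', '\u00A0', '\u200B', '\uFEFF');
    }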


Posted on June 25, 2015 in K2 Workflows


Automatic process version upgrade

First of all… there are terms and conditions. This won’t work in all cases and, for all intents and purposes, it’s a bit of an ugly hack, but there are times when something like this is needed (you have loads of live processes which urgently need a new data field, or a new action needs to be added to an activity to bypass a bug and there are live instances which are not yet in an error state, etc.).

So here’s one way to do this:

  1. Store a “Version” data field in the workflow and keep the latest version label in the .Net app’s app.config file
  2. Catch all exceptions for setting/reading data fields and executing actions
  3. If you get an exception then get the Version data field from the workflow. If it doesn’t exist, upgrade the workflow. If it’s different to the one in the config file, upgrade the workflow.
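As a rough sketch of step 3 (the config key and both helper methods are placeholders of mine, not K2 API calls):

    // Sketch: decide whether this instance needs upgrading.
    // Requires System.Configuration; "WorkflowVersion", GetDataField and
    // UpgradeWorkflow are hypothetical names.
    string latestVersion = ConfigurationManager.AppSettings["WorkflowVersion"];
    try
    {
        string currentVersion = GetDataField(procInstId, "Version");
        if (currentVersion != latestVersion)
            UpgradeWorkflow(procInstId);
    }
    catch (Exception)
    {
        // Old instances won't have the "Version" field at all.
        UpgradeWorkflow(procInstId);
    }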

Upgrading the workflow:

  1. Use the “Process Data” SmartObject to read all the process level data fields from the old process and create a Dictionary out of it.
  2. Create a new instance of the workflow and send it to a holding activity (just a code event with Sync = False). Use the dictionary to set all the data fields so that they are copied from the old process to the new process.
  3. Use the “Activity Instance” SmartObject to get a full list of all activity names for the old workflow and then use a GOTO to move the new instance to the most recent activity (see the sketch below).
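Putting those three steps into code looks roughly like this. It’s a sketch under assumptions: the process name, the holding activity and the management-server wiring are placeholders for whatever your environment uses.

    // Sketch: migrate a live instance onto the latest process version.
    using System.Collections.Generic;
    using SourceCode.Workflow.Client;

    public static class ProcessUpgrader
    {
        public static int UpgradeInstance(
            Connection client,
            SourceCode.Workflow.Management.WorkflowManagementServer mgmt,
            ProcessInstance oldInstance,
            string currentActivityName)
        {
            // 1. Copy all process-level data fields from the old instance.
            var fields = new Dictionary<string, object>();
            foreach (DataField field in oldInstance.DataFields)
                fields[field.Name] = field.Value;

            // 2. Start a new instance of the latest version; it parks in the
            //    holding activity (a code event with Sync = false).
            ProcessInstance newInstance = client.CreateProcessInstance(@"MyProject\MyProcess");
            foreach (var pair in fields)
                newInstance.DataFields[pair.Key].Value = pair.Value;
            client.StartProcessInstance(newInstance);

            // 3. GOTO the activity the old instance was sitting on.
            mgmt.GotoActivity(newInstance.ID, currentActivityName);
            return newInstance.ID;
        }
    }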

Obviously there are loads of cases where this won’t work (for example if a later version deleted an activity or data field) but in our case I’m surprised at how well it works. There are also ways of migrating the workflow history over, but that’s a bit more complicated and probably the subject of another post…


Posted on July 2, 2013 in SmartObjects


Data Fields and Workflow Versions

Here’s our scenario… We have live workflows deployed and a fairly complex ASP.Net app as a front end. In our latest deployment we added a few process-level data fields and now the application expects to be able to read and write to those data fields. However, existing live processes won’t have those data fields, so we will be looking at exceptions when we try to access them. So what to do?

I figured there would be 2 ways of dealing with this sort of versioning. The first option is to store the process version in a data field which we could read; the value of this version would tell us whether or not we can expect the data field to exist. This is fine in theory, but in practice I can see problems further down the line, plus it means 2 reads instead of 1.

The option I decided on is to create a WorkflowDataField class and return this instead of just a string. This WorkflowDataField would not only contain the value of the data field but it would also have a result type so the calling code could check if there were errors.

Here’s my WorkflowDataField class:

    public enum ResultType
    {
        Success,
        DataFieldDoesNotExist
    }

    public class WorkflowDataField
    {
        private string m_value;

        public ResultType QueryResultType { get; set; }

        public string Value
        {
            get { return m_value; }
            set { m_value = value; }
        }

        public int IntValue
        {
            get { return Convert.ToInt32(m_value); }
            set { m_value = value.ToString(); }
        }

        // etc...

        public override string ToString()
        {
            return Value;
        }
    }
Pretty simple, nothing fancy. Now when I query the data field I do something like this:

var workflowDataField = new WorkflowDataField();
try
{
    workflowDataField.Value = processInstance.DataFields[key].Value.ToString();
    workflowDataField.QueryResultType = ResultType.Success;
}
catch (Exception e)
{
    // Thrown when the data field doesn't exist on this (older) instance.
    workflowDataField.QueryResultType = ResultType.DataFieldDoesNotExist;
    workflowDataField.Value = e.Message;
}
return workflowDataField;

Unfortunately the exception thrown is nothing more specific than System.Exception, so if you want to be sure that the exception is being thrown because of a missing data field you have to parse the exception message for ‘does not exist’.

I am a firm believer in the philosophy of not using exceptions to control program flow, but in this case I made an exception (sorry for the pun). The reasons I’m allowing myself to do this are:
1. It’s not a system exception, it’s expected.
2. Exceptions are slow and expensive, but we’d only be catching exceptions while the old instances are alive. As time goes on we’d get fewer and fewer, so in the long run it would be better performance than doing 2 reads each time.

Hope that helps someone…


Posted on July 5, 2012 in K2 API, K2 Workflows


Management Tab Missing in Workspace

I recently ran into a problem after an upgrade to 1420 whereby the Management tab was missing from the Workspace. I did all the regular checks (license, permissions, etc.) with no luck. If you ever run into this situation, back up your K2Workspace database and then truncate the ActionPermissions table. This resolved it for me.

Thanks to Tyler Clark from K2 for helping me out with this one.


Posted on January 31, 2012 in SmartObjects


K2 and Entity Framework

I’m taking a quick break from WF4 to blog about some fun we’re having with Entity Framework and K2. I’ll get back to WF4 straight away…

Our situation is as follows: our workflow directly references a class library (our application’s business layer) and the class library uses Entity Framework to access the application database. The first event in our workflow is a code event which loads some config entries from this class library (which in turn gets them from our database) and then assigns a task to the user. We’re starting our workflows through the API and we’re doing it synchronously, which means the method doesn’t return until the K2 process reaches a stable state like a client event.

Here’s the issue – the first call to start a process instance takes 35 seconds. After that they are all quick, but after a bit of inactivity it takes 35 seconds again.

To understand what’s happening you need to understand the inner workings of Entity Framework, K2 and AppDomains. The actual cause of the performance issue is Entity Framework. I won’t go into all the details (because we don’t need them) but when you first query a database using Entity Framework a model needs to be created. This takes a lot of time. There is also some connection pooling, which also takes time. The important point here is that the model and the connection pool are cached and kept alive within the context of an AppDomain.

To make it more complicated, K2 has designed their application such that each process definition has its own AppDomain (which is of course the correct design decision). This means that if you have 2 different process definitions referencing the same class library you will have 2 AppDomains, 2 cached models and 2 connection pools. K2 also unloads and reloads these AppDomains after 15 minutes of inactivity, so if nothing happens with ProcessA for 16 minutes and you start a new one, you’re going to see that long delay.

So what can you do? Unfortunately, not much. I tried hosting our class library in a WCF service in the hope that this would keep the model cached outside of our K2 process’ AppDomain but it doesn’t work that way. You can’t use a ‘Keep Alive’ script because the script would need to be run in the same AppDomain as the model, and unless you’re going to build that into each of your K2 processes that’s not going to work. You can, however, try one of the following options:

  1. Start your processes asynchronously. We may have to go this route.
  2. Set the AppDomainUnload timeout value to something greater than 15 minutes. To do this you set the "AppDomainUnload" setting in the \Host Server\bin\K2Server.setup file on the K2 server to something like 8 hours: <AppDomainUnload Minutes="480" />

Setting AppDomainUnload to a value of “0” turns off AppDomain unloading completely, but don’t ever do this or you will run into memory usage problems.

I hope this helps someone.


Posted on January 25, 2012 in .Net, K2 API


Windows Workflow Foundation Episode 3

Welcome to part 3 of Windows Workflow Foundation 4 for K2 developers. In part 1 I outlined the differences between how you should approach K2 and WF4, and in part 2 I spoke about the different designer options you have and mentioned a few best practices. There is 1 more topic I need to discuss before we can start building our workflow engine, and that is the topic of workflow persistence.

As I mentioned, WF4 is not a service – it’s baked into your application, so when your application thread stops running, so does your workflow. If we’re going to be turning this into a service then we’re going to need a way of maintaining workflow state. We’ll be doing this by serialising our workflows and persisting them in a database.

Step 1 – Creating the SQL Store

Microsoft have made this simple for us by providing us with the store to use. Assuming you have .Net 4 installed on your machine you should be able to navigate to [C:\Windows\Microsoft.NET\Framework\v4.0.30319\SQL\en] and you’ll see a bunch of SQL scripts. 2 of these are important to us:

  • SqlWorkflowInstanceStoreSchema.sql – This script will provide you with the schema you need to persist your workflow
  • SqlWorkflowInstanceStoreLogic.sql – This adds the logic to the schema

Start by creating a new database called WorkflowInstanceStore, then execute the above scripts against the database. That’s it – you’re done.

Step 2 – Set your workflows up to automatically persist

What we want to accomplish here is to get your workflows to automatically persist whenever the workflow is idle, such as when it’s waiting for a user to do something. We will also need a way of referencing this workflow again at a later stage (so you’ll need to save the workflow ID), and you’ll need to be able to tell the workflow which activity to resume, which is what a Bookmark is for (a bookmark is just a string). I’ll show you how to get your workflow to automatically persist itself to the database by creating an activity which listens for the idle event. Here’s what you will do (we will implement this later – this is just pseudocode to get the idea across):

  1. Create a code activity which inherits from NativeActivity and call it WaitForSomething.
  2. Add this property override in:

     protected override bool CanInduceIdle
     {
         get { return true; }
     }

  3. Create a new workflow and somewhere in the workflow add your WaitForSomething activity. Now when you start your workflow, you’ll do it this way:

     WorkflowApplication wfApp = new WorkflowApplication(MyWorkflowDefinition);
     SqlWorkflowInstanceStore store = new SqlWorkflowInstanceStore(connectionString);
     wfApp.InstanceStore = store;
     wfApp.PersistableIdle = delegate(WorkflowApplicationIdleEventArgs e)
     {
         // Instruct the runtime to persist and unload the workflow.
         return PersistableIdleAction.Unload;
     };
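To make that pseudocode concrete, here’s a minimal sketch of what the WaitForSomething activity could look like. The bookmark name is my own placeholder; the CanInduceIdle override, CreateBookmark call and callback signature are standard WF4.

    using System.Activities;

    // A minimal sketch of the WaitForSomething activity described above.
    public sealed class WaitForSomething : NativeActivity<string>
    {
        // Returning true lets the runtime persist and unload the workflow
        // while this activity is waiting.
        protected override bool CanInduceIdle
        {
            get { return true; }
        }

        protected override void Execute(NativeActivityContext context)
        {
            // Create a named bookmark; the workflow goes idle here until
            // something resumes it.
            context.CreateBookmark("WaitForSomething", OnBookmarkResumed);
        }

        private void OnBookmarkResumed(NativeActivityContext context,
                                       Bookmark bookmark, object value)
        {
            // Whatever is passed to ResumeBookmark becomes the result.
            Result.Set(context, (string)value);
        }
    }

Resuming later is then a matter of loading the instance from the store by its saved ID and calling wfApp.ResumeBookmark("WaitForSomething", someValue).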

I’ll stop here and recap what we’ve done. Firstly, we created an activity which can cause the workflow to go idle. Then we added this activity to a workflow and created a delegate to handle the PersistableIdle event. The delegate returns PersistableIdleAction.Unload, which tells WF4 to unload the process from memory and persist it to the SQL store referenced by the connection string we provided.

Right – I think I’ll stop there because if we go any further without doing some examples we’re going to get lost. In the next post we will start building our project and I’ll include the project for you to download so you can follow along.

As always, please send any and all questions my way.


Posted on December 15, 2011 in Windows Workflow Foundation


Windows Workflow Foundation Episode 2

[Yes – I know I only published episode 1 a few minutes ago, but I wrote these 2 posts on a 14-hour plane trip]

Welcome back to WF4 for K2 developers. In my earlier post I spoke about the differences between K2 and WF4. In this post I’ll speak about the workflow designer options you have and some best practices.

There are a number of ways you can build your workflows, each with pros and cons:

  1. Firstly there’s the graphical designer. All you K2 gurus will leap straight into this one, but keep in mind that there are no line rules – you have to use decision points to build in your business logic. Also, there are several options for the graphical designer – you could use a sequence, a flow chart, a state machine, or any combination of these. Again, each of these has its pros and cons, but if you’re looking for something similar to the K2 canvas then try a flow chart. True to style, the graphical designer is purely a front end to generate XAML, which leads me to the second option…
  2. XAML. If you’re happy coding in XAML then you can do it all in XAML. Personally I don’t see the point…
  3. Code. Anything you can do from the designer you can do in code, and I have to say that it’s very well thought out – designing a workflow in code is simple, quick and easy to follow (to a point).

So which should you use? The answer is, of course, that it depends on the workflow you’re designing. Here are some guidelines which should help you choose…

  • For small workflows, use code. There is a VERY tiny performance improvement and it’s easy to edit the functionality, but the biggest advantage is the ability to use the built-in refactoring functionality. If you’re using XAML or the designer, you’re refactoring manually. Of course, if your workflow is coded then you can step through it in the debugger as well.
  • For all other workflows, use the visual designer. As nice as the code is, it quickly becomes difficult to follow the business logic and you get lost in a sea of confusion.
  • For those of you in the real world, you will use both. When I am building a WF4 app I create a project to hold commonly-used custom activities which I have written. Some of these are in code, some are in XAML, but when I build the actual workflow it’s always in the designer. The only thing I would say is that there’s no point in writing a workflow in XAML from scratch. It’s like coding C# in Notepad. Yes, you can do it, but why would you?
  • We have an approach in Dynamyx which is “everything is a framework”. Try to make your approach to your workflows as generic as possible. Look for reusable sections and turn those into separate custom activities. Build up a library of generic activities for use in other projects. When we start building our workflow engine you’ll see what I mean.

Finally, if you’re using code activities you will be writing classes which inherit from one of several base classes. By default you should use either CodeActivity or its asynchronous counterpart, since these are lightweight. If you need more advanced features (like automatic persistence when your activity is idle) then you need to inherit from NativeActivity. There are others, but these cover 99% of the cases you will see.
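To make that concrete, here’s a minimal sketch of a lightweight custom activity built on CodeActivity<TResult> (the activity and argument names are mine):

    using System.Activities;

    // A lightweight custom activity; no persistence or bookmarks needed,
    // so CodeActivity<TResult> is the right base class.
    public sealed class Greet : CodeActivity<string>
    {
        // Input argument, bindable from the designer, XAML or code.
        public InArgument<string> Name { get; set; }

        protected override string Execute(CodeActivityContext context)
        {
            return "Hello " + context.GetValue(Name);
        }
    }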

In the next post I’ll speak about workflow persistence and events, since we’ll need a good understanding of how this works before you can dive into developing the workflow engine.


Posted on November 30, 2011 in Windows Workflow Foundation


Windows Workflow Foundation Episode 1

A wise man once said that if all you have is a hammer then everything you see is a nail. A wiser man (Scott Adams) pointed out that if all you have is a hammer then you’re not going to be thinking about nails because you’ll be using your hammer to hide your naughty bits on account of you being naked (seeing as all you have is a hammer). I reckon they were both right. The point is that even as a K2 consultant it’s worth knowing what the other workflow vendors are doing in the workflow world.

I have been working on a workflow project where K2 wasn’t an option. I dutifully looked at other workflow options (there are a few open source options out there) but none of them ticked enough boxes. Eventually I decided that my best option was to write my own workflow engine using Windows Workflow Foundation.

Since you’re reading this blog I’ll assume that you are either a K2 developer, or you have no life at all and need to get out more. I’ll give you the benefit of the doubt. I’m going to be writing a series of blogs here describing Windows Workflow Foundation from the point of view of a K2 developer. Windows Workflow Foundation 4 is not difficult, but if you approach it like a less-featured K2 (which is what I did) you’re going to get nowhere. Hopefully this will make it easier.

In this blog I’ll be outlining the differences between WF4 and K2 and how you should approach them differently. In later posts I’ll walk you through the process of wrapping WF4 up into a workable workflow engine.

So if you’re a K2 developer looking at WF4 for the first time, these are the things you need to keep in mind or you will get totally lost:

  1. In WF4 there is no difference between an activity and a workflow. When you start a workflow you are actually starting a new activity. You can create complex activities (similar to how you can create a UserControl which contains other controls) but they all become part of the same workflow. So there’s no concept of an IPC call here. In fact, you can create 2 complex workflows and add those workflows as child activities in a parent workflow, and all the workflows will have the same instance GUID assigned to them. So when I speak about a workflow I could be speaking about an activity or a group of activities as well.
  2. In WF4 there’s no concept of tasks being assigned to people. There’s no AD integration. Basically there are server tasks and that’s it.
  3. In WF4 the workflow is baked into the application. This means that when your application is running, your workflow is alive and accessible. When your app is closed, the workflow is closed as well. This is a biggie. There’s no concept of making a call to a workflow engine. You can persist your workflow to a database in order to save state, but if you close your application and hope that an escalation rule will fire in the background you’re out of luck. Of course you can create a WCF service to host your workflow app, or you could use AppFabric. We will obviously do something along these lines.

Those are the main differences – if you can get your head around these points you’re well on your way to understanding WF4. The next post will speak about the workflow designer and mention some best practices. If you have any questions during the series, please feel free to ask…