Author Archives: Trent Jacobs

Windows Workflow Foundation Episode 1

A wise man once said that if all you have is a hammer then everything you see is a nail. A wiser man (Scott Adams) pointed out that if all you have is a hammer then you’re not going to be thinking about nails because you’ll be using your hammer to hide your naughty bits on account of you being naked (seeing as all you have is a hammer). I reckon they were both right. The point is that even as a K2 consultant it’s worth knowing what the other workflow vendors are doing in the workflow world.

I have been working on a workflow project where K2 wasn’t an option. I dutifully looked at other workflow options (there are a few open source options out there) but none of them ticked enough boxes. Eventually I decided that my best option was to write my own workflow engine using Windows Workflow Foundation.

Since you’re reading this blog I’ll assume that you are either a K2 developer, or you have no life at all and need to get out more. I’ll give you the benefit of the doubt. I’m going to be writing a series of blogs here describing Windows Workflow Foundation from the point of view of a K2 developer. Windows Workflow Foundation 4 is not difficult, but if you approach it like a less-featured K2 (which is what I did) you’re going to get nowhere. Hopefully this will make it easier.

In this blog I’ll be outlining the differences between WF4 and K2 and how you should approach them differently. In later posts I’ll walk you through the process of wrapping WF4 up into a workable workflow engine.

So if you’re a K2 developer looking at WF4 for the first time, these are the things you need to keep in mind or you will get totally lost:

  1. In WF4 there is no difference between an activity and a workflow. When you start a workflow you are actually starting a new activity. You can create complex activities (similar to how you can create a UserControl which contains other controls) but they all become part of the same workflow. So there’s no concept of an IPC call here. In fact, you can create 2 complex workflows and add those workflows as child activities in a parent workflow, and all the workflows will have the same instance GUID assigned to them. So when I speak about a workflow I could be speaking about an activity or a group of activities as well.
  2. In WF4 there’s no concept of tasks being assigned to people. There’s no AD integration. Basically there are server tasks and that’s it.
  3. In WF4 the workflow is baked into the application. This means that when your application is running, your workflow is alive and accessible. When your app is closed, the workflow is closed as well. This is a biggie. There’s no concept of making a call to a workflow engine. You can persist your workflow to a database in order to save state but if you close your application and hope that an escalation rule will fire in the background you’re out of luck. Of course you can create a WCF service to host your workflow app, or you could use AppFabric. We will obviously do something along these lines.
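Points 1 and 3 are easiest to see in code. Here’s a minimal WF4 sketch — this assumes a C# console project targeting .NET Framework 4 with a reference to System.Activities; the activity types used are the standard WF4 ones:

```csharp
using System.Activities;
using System.Activities.Statements;

class Program
{
    static void Main()
    {
        // A Sequence is just an activity, but invoking it IS running a
        // workflow - there's no separate "workflow" type in WF4.
        Activity workflow = new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Step 1" },
                new WriteLine { Text = "Step 2" }
            }
        };

        // Runs synchronously, in-process. When this application exits,
        // the workflow dies with it unless you've persisted it somewhere.
        WorkflowInvoker.Invoke(workflow);
    }
}
```

Notice there’s no call out to an engine anywhere — the workflow lives and dies inside the host application, which is exactly point 3.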

Those are the main differences – if you can get your head around these points you’re well on your way to understanding WF4. The next post will speak about the workflow designer and mention some best practices. If you have any questions during the series, please feel free to ask…


Posted by on November 30, 2011 in Windows Workflow Foundation


Getting child process IDs after IPC Event

I like IPC events and use them all the time – if you design your processes well with IPC events then you can keep them small and manageable, and each time you get to an IPC event your process is bumped up to the latest version so it’s easier to propagate bug fixes. However I recently needed to keep track of the child process IDs and I realised that you can’t do this using the IPC event wizard. I tried to map the child process ID to an activity level data field in the parent process but unfortunately I couldn’t add the process ID field from the child process as a source.

So how can you do it?

My implementation was to pass the parent process ID into the child process, and then write these IDs to a database in the first activity of the child process:

Child Process
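That first activity is just a server event with a few lines of code. Here’s a sketch of what it might look like — the “ParentProcessID” data field name, the ProcessLinks table, and the connection string are all my own assumptions for illustration, not K2 conventions:

```csharp
// Hypothetical server event code in the first activity of the child process.
// Assumes the IPC wizard passed the parent's ID into a "ParentProcessID"
// data field, and that a ProcessLinks(ParentID, ChildID) table exists.
int parentId = Convert.ToInt32(K2.ProcessInstance.DataFields["ParentProcessID"].Value);
int childId = K2.ProcessInstance.ID;

using (System.Data.SqlClient.SqlConnection conn =
    new System.Data.SqlClient.SqlConnection(connectionString))
using (System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand(
    "INSERT INTO ProcessLinks (ParentID, ChildID) VALUES (@parent, @child)", conn))
{
    cmd.Parameters.AddWithValue("@parent", parentId);
    cmd.Parameters.AddWithValue("@child", childId);
    conn.Open();
    cmd.ExecuteNonQuery();
}
```

Once the IDs are in your own table you can join parent to children however you like, which is something the IPC wizard won’t give you.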

Another way to do it is to include the parent process instance ID in the folio of the child process, then when you retrieve the worklist you can filter the worklist by folio. This is a simple way of doing it but isn’t the most efficient.
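If you go the folio route, the retrieval side looks something like the sketch below (SourceCode.Workflow.Client — the WorklistCriteria member names here are from memory, so verify them against your K2 version; the folio format and `parentProcInstId` variable are my own assumptions):

```csharp
using SourceCode.Workflow.Client;

// Assumes the child's folio was set to something like
// "Invoice 123 (Parent:456)" when the IPC event started it.
using (Connection connection = new Connection())
{
    connection.Open("localhost");

    WorklistCriteria criteria = new WorklistCriteria();
    criteria.AddFilterField(WCField.ProcessFolio, WCCompare.Like,
        "%(Parent:" + parentProcInstId + ")%");

    Worklist childTasks = connection.OpenWorklist(criteria);
}
```

Simple, but as mentioned, string-matching on the folio is doing the filtering work, which is why it isn’t the most efficient option.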

Hope that helps.


Posted by on July 22, 2011 in K2 Workflows


K2.ActivityInstanceDestination.User is NULL?????

This one caught me by surprise! I needed to build up a CSV list of email addresses for all previous task owners for an activity, so the simplest way would be to just append the email address each time we get to the task, right? Apparently not.

Here’s my code:

// Append this task owner's email address to the CSV list
K2.ProcessInstance.DataFields["AllApproversEmailAddresses"].Value += K2.ActivityInstanceDestination.User.Email + ",";

Nothing too complicated. However, running it gave me the dreaded (and oh so informative) “Object reference not set to an instance of an object”. So what’s happening?

Well, it’s actually not that complicated. Basically when you create any activity the default destination rule is “Plan Just Once” (in fact you don’t even have the option of choosing anything else unless you run the Activity wizard in advanced mode.) Anyway, when this option is chosen the destination instance isn’t actually created, which means that K2.ActivityInstanceDestination will always return null.

To get around this issue you will need to do the following:

  • Right-click the activity to open the activity properties.
  • Click on the Destination Rule Options tab, and then click the ‘Back’ button on the wizard.
  • Check the box which says “Run this wizard in advanced mode”, then click “Next”.
  • You’ll see that “Plan per slot (no destinations)” is selected. Choose “All at once” and then click through the rest of the wizard.
  • Deploy and test.

That’s it. Hope that helps someone!


Posted by on June 10, 2011 in K2 Workflows


28025 Failed to Start IPC

I came across this error again and it reminded me that I’ve come across it a number of times before, so I’ll list the possible issues here:

  1. You don’t have rights to start the child process. When you set up the IPC call the default authentication is set to ‘Integrated’ and I often forget to change this to ‘Impersonate’. When I get this 28025 error the first thing I check is whether or not I’ve set the authentication to impersonate and that the user has start rights on the child process.
  2. The data field you’re setting in the child process doesn’t exist. I came across this when I created my data field in the child process while I was going through the wizard to set up the IPC call. The problem was when I finished the wizard and everything compiled and exported, the data field I created hadn’t actually been created. One day if I have time I’ll raise a bug for this…
  3. You’re passing data with incompatible types. You can’t pass a string from the parent process to an int data field in the child process, for example.

Posted by on June 7, 2011 in K2 Workflows


Are you getting the most value from K2 blackpearl?

As a BPM architect and business analyst I’m in the fairly unique position of having seen how companies all over the world are using K2 blackpearl. It would be nice to be able to say that they are all going about it the right way, but I find that most companies aren’t. That’s not in any way judging anyone, but in reality the best way of using any technology comes from experience, and in most cases companies don’t have the time (or budget) to spend getting experience and building up the expertise to tackle a workflow project using industry best practices.

This isn’t going to be a K2 blackpearl best practices guide, but here are a few checks and balances to make sure you’re on the right track:

1. Get results fast.

As a rule of thumb I feel that I and my team should be able to deliver a useable, valuable workflow project within 3 months. If it takes longer than that then I have a good look at the project – sometimes it’s a large project and there’s a good reason for a longer timescale – but our rule of thumb is 3 months.

2. Choose the right technology.

Spend the time at the start of the project to choose the right technology to fit around K2. In many cases this is simple – you can safely go straight to Microsoft SQL Server and know you’ve chosen well. The business part is usually simple as well – you’re going to be using .Net. The choice of UI technology, however, is critical. Are you going to use ASP.Net? Forms? InfoPath? SharePoint lists? All have their pros and cons, but if you make the wrong choice here you’re going to pay for it over and over again. This leads me to…

3. Fit the requirements to the tech, not the other way round.

Once you’ve chosen your tech, accept the limitations and live with them. There’s no silver bullet technology to solve all your problems. The first thing you want to do at the start of the project is print out a big sign which says “it is what it is” and hang it on the wall in the meeting room.

The most common time I see people struggling with this is with InfoPath. InfoPath is a great technology for getting results fast. I can get a great looking UI done with back end integration in almost no time at all, especially with SmartObject technology. But it’s not ASP.Net. I’ve seen people take InfoPath so far beyond what it’s meant to do that the result is completely unusable. But I’ve also seen a company which asked “what is InfoPath good at?”, focused on those areas, and the result is probably the most successful K2 project I’ve ever seen.

4. Invest time in the API.

Blackpearl has a great API. It deserves to be treated like any API, i.e. don’t just start coding and hope for the best – dig deep first and then build up a reusable framework using the API. We do this for every project we do, and now we have a well-written, well-documented, reusable, TESTED, generic K2 framework to pop into any project. And no – our framework isn’t for sale 🙂

5. Buy K2 skills into your project.

I once worked for a .Net startup in London. The company was started by about 15 consultants from a major consultancy company who thought that they could send their consultants on a 2 week .Net course and then write enterprise software. After a few months they hired me. The first thing I told them to do was to delete everything and start again from scratch. What a way to make friends!

My point here is that there is a very experienced community out there, and it’s worth investing in getting it done right. This is possibly the most important point I can make. Buy in the right skills. You have a number of options:

  1. Hire an architect with K2 experience.
  2. Contact K2 directly and ask them to recommend some help.
  3. Contact a K2 partner.

You don’t need to outsource your whole project. At the very least, if your budget is tight you can buy 2 weeks of time from a recommended K2 partner and get your architecture and design correct from day 1. We do this all the time for people. If you start off in the wrong direction you will pay for it in the long run. Again, and again, and again, and again…

Hope that helps!


Posted by on April 29, 2011 in SmartObjects


What do you want to see here?

Whenever I come across something interesting I like to let people know. They say it takes a wise man to learn from his own mistakes, but I believe the people who really get it right are the people who know how to learn from other people’s mistakes. So I used to regularly write up my thoughts and email them to work colleagues. Those emails then became this blog and now I’ve got people from 30 countries reading it. My biggest challenge now is to keep making enough mistakes so that I have something new and interesting to write about 🙂

Anyway, this blog isn’t meant to be a forum – K2 Underground does an excellent job of filling that gap. However, if there’s a topic you’d like to see covered, let me know and I’ll see about writing up something about it…


Posted by on April 22, 2011 in SmartObjects


Pass through authentication (AKA the end of Kerberos?)

K2 released pass through authentication in 1290 (the version, not the year) and many people are hailing its coming as the end of Kerberos trouble. In many ways they are correct, but not all. This is a VERY brief outline of what pass through authentication is (and isn’t) which will hopefully let you decide if this is something you need to consider in your environment.

Kerberos, as I’m sure you know, is a black art which is only understood by a select group of monks who live in complete isolation and probably charge ridiculous consulting rates. For the rest of us, we have to make do with hours (or days) of fighting with Kerberos before we get a fully working architecture. In simple terms, when you make a call from one server to another your credentials are passed on to the next server so that the next server knows who you are. For example, if a user (named James) makes a request which hits your web server and your web server needs to make a call to your application server, this works just fine because your application server knows the call originated with James. However the default way of passing these credentials on (NTLM) only passes them on for 1 ‘hop’. In the example I gave, if the application server needed to make a call to a database server the database server wouldn’t have a clue who the call originated with, and instead of the caller being identified as James the caller would be anonymous. Kerberos solves this issue by allowing your credentials to be passed along as many hops as you like.

K2’s pass through authentication lets you bypass the need for Kerberos by basically doing the following:

  1. User makes a call to the web server which in turn makes a call to the K2 server.
  2. The K2 server needs to make a call to the database. However, when it does the database server identifies the user as ‘anonymous’.
  3. K2 realises that Kerberos isn’t enabled, so it tells the database server who the user is and then makes the call again, and voila – no more anonymous user.

This is great, but you need to realise that this only works for servers which have K2 components on them. If there’s a hop anywhere in the call stack between two servers with no K2 components, then pass through authentication will (obviously) not work and you’ll need to have Kerberos enabled anyway.

I have confirmed that this extra bit of handshaking will only happen when you’re opening a connection – once the connection is open there’s no extra chatter to slow things down.

p.s. a hop is actually across a security boundary, not just across a server, but for the sake of illustration it makes sense to call a server a hop.


Posted by on April 20, 2011 in Extending K2, Security


Error while deploying SmartObjects to different environments

I keep seeing this error pop up in projects so I’ll add it here as a gotcha. Basically you have a fully working project which has been completely tested, and then just when you’re confident that all is working you deploy to the live environment and boom – you get this error:

Error 1 Deploy SmartObjects: Task error: SmartObject Server Exception: Dependancy could not be created: System.Exception: Dependancy could not be created. Parent does not exist in this environment.

This really is no big deal. To understand the error though, you have to understand the relationship between service objects and SmartObjects. SmartObjects can’t exist on their own – they need a service object to live on top of (consume, if you’re going to get picky). When you call a SmartObject method, the SmartObject calls a method in the underlying service object and it’s the service object which interfaces between the K2 world and the real world. You get some inkling of this relationship when you create the SmartObject in the first place – you register your service object and then create the SmartObject on top of it.

What isn’t immediately obvious is the fact that the link between the SmartObject and the service object is a GUID, not the name of the service object. This GUID is assigned to the service object when you register it. When you registered your service object on the live server it was assigned a new GUID, so when you tried to deploy the SmartObject, the service object it needs (i.e. the parent) wasn’t there.

Generally when you deploy a service object for the first time you want to make careful note of the GUID it’s assigned, and include that GUID in your deployment documentation (yes – you SHOULD be writing deployment documents!). Now when you register your service object on a new K2 server, don’t use the default GUID – you should copy and paste the GUID from your deployment document. Now when you deploy your SmartObject it will no longer be an orphan.



OK, I know this has absolutely nothing to do with K2 blackpearl but it’s one of my favourite Dilberts ever. Had to share… 🙂


Posted by on April 14, 2011 in SmartObjects


Using the API 101

I was at one of the K2 Roadshows recently and the main question on everyone’s mind was, surprisingly, how to best go about using K2’s API. This isn’t going to be an in-depth answer but hopefully it will give people an idea of where to start.

First we need to clear up some concepts. The K2 Host Server is actually made up of a number of components or Hosted Services. Some examples of hosted services are:

  • Environment Library Server – this is the hosted service you query every time you want the environment fields for a particular environment
  • Licensing Management Server – The name speaks for itself
  • Event Bus Server
  • etc.

There are 2 hosted services which are important to get your head around, and these are:

  • Workflow Server – To do all things workflow-related.
  • SmartObject Server – To do all things SmartObject related.

Keep these 2 hosted services in mind because almost everything you do with the K2 API will be essentially communicating with these 2 hosted services. So how do we communicate?

Communication with the hosted services is always via a TCP connection. By default you’re going to be using port 5555, the only exception being the workflow server which uses port 5252. These ports can be configured and changed during the installation, but you wouldn’t want to.

So in a nutshell, here’s what you do to use the API:

  1. Open a TCP connection to the hosted service you need to connect to
  2. Make the call
  3. Close the TCP connection

Here’s an example of using the API to get the worklist of the person who is currently logged in (something you would need to do if you were building a custom task list, for example):

string K2ServerName = "localhost";
SourceCode.Workflow.Client.Connection connection = new SourceCode.Workflow.Client.Connection();
connection.Open(K2ServerName);
SourceCode.Workflow.Client.Worklist workList = connection.OpenWorklist();
connection.Close();

That’s about it, but I want to mention some best practices. First of all, re-use the same connection for all the calls you need to make, as shown below:

string K2ServerName = "localhost";
SourceCode.Workflow.Client.Connection connection = new SourceCode.Workflow.Client.Connection();
connection.Open(K2ServerName);
...Make many calls using the active connection...
connection.Close();

Secondly, if you encounter an exception while making the API calls you will never get to the line which closes the connection, which will leave you with open connections. This is obviously not what you want, so you should create your connection using a ‘using’ statement so that the connections are closed for you as soon as you’re done with them (all K2 connections implement BaseAPIConnection which in turn implements IDisposable). Here’s how you do that:

string K2ServerName = "localhost";
using (SourceCode.Workflow.Client.Connection connection = new SourceCode.Workflow.Client.Connection())
{
    connection.Open(K2ServerName);
    SourceCode.Workflow.Client.Worklist workList = connection.OpenWorklist();
}
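Putting it all together, here’s a fuller sketch of the kind of loop a custom task list would run — note the WorklistItem property names here are from memory, so check them against the client API documentation for your K2 version:

```csharp
using SourceCode.Workflow.Client;

using (Connection connection = new Connection())
{
    connection.Open("localhost");

    foreach (WorklistItem item in connection.OpenWorklist())
    {
        // The serial number is what you'd keep hold of to open and
        // action this specific worklist item later.
        Console.WriteLine("{0} - {1}", item.SerialNumber,
            item.ProcessInstance.Folio);
    }
}
```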

Hope that helps…


Posted by on April 13, 2011 in K2 API