Marlon Dumas raises a couple of excellent objections to my earlier post, BPMN to BPEL Round-tripping. His objections are so worthy of analysis that, instead of replying in the comments, I am writing this post.

Marlon’s objections may be put in two buckets:

1. Semantic Mismatches Between BPMN and BPEL

Marlon writes:

Another thing which people seem to ignore (Ismael first in the line), is that BPMN allows for models where activities are connected in arbitrary manners, with almost no restrictions, whereas BPEL is largely based on block-structured activities.

I could not agree more. When I first did this at Siebel, we decided to allow arbitrary BPMN. Theoretically, this was possible because we generated BPEL transparently to the user (just as Siebel users never see the SQL generated from Siebel metadata). In practice, this meant complex graph-to-tree and other semantic conversions, which, I suspect, the engineers loved but which were a constant source of frustration to me as a Product Manager, since they complicated adding support for semantics already supported on either end.

My current belief aligns with what Marlon goes on to propose:

Maybe what vendors should do is to clearly document and perhaps even enforce in their tools the restrictions they make on the BPMN models they can round-trip (e.g. classes of BPMN models as discussed in Bruce Silver’s blogs).

And this stems not only from the implementation difficulties described above, but also from my sincere belief that it provides for high-fidelity preservation of business intent. The cost is some loss of flexibility for the business analyst; however, in my experience, educating business analysts about the limitations of what is executable and what is not actually clarifies their thinking on what they want their processes to do.

Although I did not want this post to be about Oracle products, since the original post was, I will mention here that this is the approach we are taking at Oracle. Our Business Analyst tool will validate the analyst processes marked to be made executable to ensure that they meet constraints imposed by BPEL semantics.
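The validation described above can be sketched in a few lines. This is a minimal illustration, not Oracle's actual validator: it assumes a simplified, linearized representation of a BPMN flow and checks the core BPEL constraint that split gateways nest properly with their matching joins, like balanced parentheses. All element names are illustrative.

```python
# Sketch: checking that a (linearized) BPMN flow is block-structured,
# i.e. every split gateway is closed by a matching join of the same
# type, so it can map onto BPEL's structured activities.
# The representation below is illustrative, not any vendor's format.

SPLITS = {"xor-split": "xor-join", "and-split": "and-join"}

def is_block_structured(elements):
    """Return True if splits and joins nest like balanced parentheses."""
    stack = []
    for el in elements:
        if el in SPLITS:                      # opening gateway
            stack.append(SPLITS[el])
        elif el in SPLITS.values():           # closing gateway
            if not stack or stack.pop() != el:
                return False                  # crossing or mismatched join
    return not stack                          # every split was joined

# A well-nested flow maps cleanly onto nested BPEL blocks...
ok = ["task", "xor-split", "task", "and-split", "task", "and-join", "xor-join"]
# ...while crossing gateways (XOR closed before the inner AND) do not.
bad = ["xor-split", "and-split", "xor-join", "and-join"]

print(is_block_structured(ok))   # True
print(is_block_structured(bad))  # False
```

A real validator would of course work on the full BPMN graph (and handle loops, events, and multiple end states), but the stack discipline above is the essence of the graph-to-tree constraint.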

However, this raises the question of why use BPMN at all if it is constrained by BPEL. The answer is not so much that BPEL lacks a notation standard, but that business analyst models are multi-layered and not all layers are meant to be executed. The freedom allowed by BPMN is valuable for the higher layers; for the executable layers, the analyst has more constraints but nevertheless the same notation.

2. Is Refinement a Realistic Model?

Marlon questions mapping between logical and physical activities:

I’ve seen BPMN models produced by domain analysts which contain tasks that you would simply not automate at all (i.e. one task in the logical model = zero tasks in the executable model). I’ve also seen cases where a few steps are added in the technical model (especially data extraction steps) which the domain analyst wouldn’t care less about. Moreover, these additional steps in the executable model are not always related to a uniquely identifiable activity in the “logical” model.

I agree with both scenarios. In fact, some common examples of the two are:

a. The analyst model may include human activities that do not involve computer interactions – such as researching a customer or filing a document in a cabinet.

b. The implementation model needs steps such as initializing variables.

However, this is not inconsistent with the notion of refinement. Logical activities may be refined as empty activities in the implementation model. Further, the implementation model is not constrained to add steps only within a logical activity: implementation activities may be added anywhere in the process as long as the flow in the analyst model is not violated. (I think the confusion stems from some process modeling tools that allow a developer to add code only behind empty boxes put in by a Business Analyst.) Of course, it helps from a maintenance perspective if the additions in the implementation model are somehow anchored to activities in the analyst model, as this facilitates updating the implementation model while correctly preserving the additions when the analyst model changes. However, “within” is not the only possible anchoring relationship – “before” and “after” are also possible.
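The anchoring idea can be made concrete with a small sketch. This is an illustration of the concept, not any product's data model: developer additions are recorded relative to analyst activities with a "before", "within", or "after" position, so they survive changes to the analyst model. All activity names are made up; "within" steps, which conceptually live inside an activity's refinement, are simply flattened after it here.

```python
# Sketch of anchored refinement: developer additions are stored relative
# to analyst activities, so they can be re-woven whenever the analyst
# model changes. Names and the flat-list representation are illustrative.

def weave(logical, additions):
    """Rebuild the implementation flow from the current logical model
    plus the developer's anchored additions."""
    flow = []
    for activity in logical:
        for step, anchor, pos in additions:
            if anchor == activity and pos == "before":
                flow.append(step)
        flow.append(activity)
        # "within" steps belong inside the activity's refinement;
        # this flat sketch simply places them right after it.
        for step, anchor, pos in additions:
            if anchor == activity and pos in ("within", "after"):
                flow.append(step)
    return flow

additions = [
    ("initialize variables", "Receive Order", "before"),
    ("extract customer data", "Check Credit", "within"),
]

v1 = weave(["Receive Order", "Check Credit"], additions)
# The analyst later inserts a new activity; the additions survive.
v2 = weave(["Receive Order", "Notify Sales", "Check Credit"], additions)
print(v1)
print(v2)
```

The point of the exercise: because each addition names its anchor rather than a position in the flow, re-running the weave against a changed analyst model preserves the developer's work automatically.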

Marlon also presents some evidence from a project at Danske Bank in Denmark:

“…an activity was described as “create all cards”. When the developer should implement such an activity, he had to consider if a new service should be developed to create a bunch of cards, if an existing service for creating one card should be called several times in a loop structure, and what should be done in case of failures when creating the cards? Such decisions are not implementation issues; it is decisions that should have been modeled in details in the [high-level] model.”
And also that:
“Activities in a process may depend on each other. For instance, an account must be created before creating a card. Such dependencies were not always described explicitly and the developer had to figure out how to organize the control flow. These dependencies should have been described in the [high-level] model.”
And yet another example:
“Some important information was neither defined by the analyst, nor by the architect. The architect had not considered which data to use when defining service invocations or user interface based activities. Both activity types may require data that is not present and that has to be retrieved from somewhere else.”

I am not sure I read the above evidence the same way as Marlon. To me, this seems to be about a specific instance rather than the methodology.

Let’s start with the easy one – the second observation, about describing dependencies in the analyst model. This is exactly what refinement is for – analysts capture the high-level flow, which of course includes dependencies, and developers implement it. The fact that this was not done does not appear to be a criticism of the refinement model. Also, the refinement model in fact enables the business analyst and developer to work out such issues; note that it is not a one-time hand-off but a multi-handshake interaction. In fact, I would say that the above project would benefit from using the Refinement model.

The third observation, about data, is again not a criticism of the Refinement model. The Refinement model supports refinement of data the same way as process: the Business Analyst models the business view of data (entities, their key attributes, relations, etc.) and developers refine them into fully fleshed-out schemas.

The first observation is an interesting one. First, I agree that there is a continuum between what an analyst should do and what a developer should do, and every organization may find itself at a different point on that continuum depending upon skills and roles. However, having business analysts sweat over loops seems a bit too much. There is a distinction between functional developers (as opposed to hard-core developers) and business analysts – the former should use the implementation-level tools, most of which today come with graphical drag-and-drop modeling.

Another Objection to Refinement

The best argument I have heard to date against Refinement is that business models and implementation models are entirely different entities – one deals with people, organization, etc. while the other deals with systems and applications.

I completely agree – business models are multi-layered and many layers, such as value-chains, do not map to execution. However, the point is that business increasingly wants to have high-fidelity control over the executable processes; and Refinement applies to such processes.

In Summary

I am a strong believer in the Refinement model. The basic premise of Refinement is not new – applications have been specified and built according to this model for a long time, with Business Analysts (Product Managers) mocking up user interfaces and developers then implementing them. In fact, when I think of the Refinement model, I think of Visual Basic. However, what has been missing in these earlier approaches is the ability to preserve the Analyst view and keep it current, as well as the ability for the Analyst to continue making changes. In the world of BPM, we have the opportunity and the requirement to address these issues, as I discussed in my original post.

In his otherwise excellent post Sharing the BPEL Love, Ismael gets Oracle’s BPMN story wrong. In this post, I explain how we, at Oracle, are addressing the BPMN to BPEL round-tripping.

In his blog Ismael says:

BPMN is where Intalio and Oracle differ. In Oracle’s case, they do not have their own process modeler, and had to license IDS Scheer ARIS. Problem is, ARIS is not fully integrated with Oracle BPEL Process Manager, hence a lot of code has to be written manually. So if you want BPMN and BPEL—and you should—but do not want to write code, give Intalio a try.

Firstly, part of the misunderstanding is due to the fact that the features outlined in this post have not yet been released; they will be in Beta soon. Nevertheless, it is important to point out that BPMN or no BPMN, Oracle’s BPEL product is a Zero Code solution.

Now to explain how Oracle is integrating the Business Analyst modeling in Aris with its BPEL Process Manager and how that provides compelling value to our customers:

Oracle’s strategy is based on our experience and belief that the business analysts and developers have different perspectives and requirements, and need different tools. However, being an applications company, we also realize the need to seamlessly integrate the life-cycle with continuous iterations. Our solution to achieve this is based on three pillars:

  1. Shared Logical Model (BPMN): The hand-off from business to IT happens as a shared logical model (expressed in BPMN). This model may be thought of as the contract between business and IT; it is the lowest level of modeling for business analysts and a high-level specification for the IT developer. It is supported in both the Business Analyst tool (IDS Scheer Aris) and the Developer tool (Oracle BPEL PM). Although it will usually be created in the Business Analyst tool, we intend to support creating it within the Developer tooling as well.
  2. Refinement from Logical to Executable: The Developer makes the logical model executable by refining it with more details. Refinement may involve providing physical bindings and transformations, as well as adding additional processing steps within what the Business Analyst considers one step. Usually, Refinement respects the logical model as a constraint, thus enabling the logical model to evolve in parallel with the refinement. However, we also give Developers the flexibility to change the logical model; such changes are then submitted back to the Business Analyst as a proposal for improvement.
  3. Bi-directional Sync: As mentioned before, in most cases the Refinement metaphor enables the logical and executable models to evolve in parallel. However, in some situations a conflict may arise; for example, when the Business Analyst deletes a step from the logical model. To facilitate synchronization in such situations, we provide a visual diff-and-merge tool. Also, as mentioned above, although changes are usually synchronized from Business to Developer, we support bi-directional synchronization.
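The synchronization check behind the third pillar can be sketched simply. This is an illustrative reduction, not the actual tool: assume the executable model keeps its own copy of the logical contract, and a sync pass diffs the two copies to find what the analyst added (needs refinement) and what the analyst deleted (an orphaned refinement, i.e. a merge decision). Step names are made up.

```python
# Sketch of the sync check: diff the analyst's current logical steps
# against the copy of the logical contract that the executable model
# refines, and flag what needs attention. Representation is illustrative.

def diff_models(logical_steps, refined_steps):
    logical, refined = set(logical_steps), set(refined_steps)
    return {
        # present only on the analyst side: not yet refined
        "added_by_analyst": sorted(logical - refined),
        # present only on the executable side: refinement now orphaned
        "deleted_by_analyst": sorted(refined - logical),
    }

report = diff_models(
    logical_steps=["Receive Order", "Approve", "Ship"],
    refined_steps=["Receive Order", "Check Credit", "Approve"],
)
print(report)
```

A real diff-and-merge tool would also track renames, reordering, and the refinement content attached to each step, but the set difference above is the conflict-detection core.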

The benefits of this approach are:

  1. Right Tool for the Right User: Instead of trying to provide an in-between tool that serves neither user’s requirements well, we provide the best tool for each user. Also, Business Analyst modeling is a serious task, and tools like IDS Scheer Aris provide significant modeling and analysis capabilities beyond support for BPMN.
  2. Right Abstractions: BPMN or no BPMN, when you model all the details needed to make a process executable, you have a rather detailed model that loses the business audience. In my experience working with Siebel Product Managers (a proxy for Business Analysts), the issue is not one of notation – they will learn whatever you teach them. The issue is providing them the right level of abstraction. With our Refinement approach, we are giving Business Analysts the level of abstraction that they feel comfortable with.

We are very excited about the above features and believe this will address the Business (BPMN) to Developer (BPEL) round-tripping problem in a significant way. These features will be available as Beta soon.

Being responsible for BPM at Siebel, I came to the realization that traditional BPM does not address the process requirements of information workers. Some of the requirements, such as collaboration and pervasive analytics, are now well understood and being addressed. However, some other requirements, as I saw them, do not get attention. In this post, I summarize some of those requirements. It is based on a presentation I made to my management – David Bernstein and Peter Lim – about a year and a half ago.

Defining the Problem

As is commonly said, Information Workers drive business processes instead of being driven by them. What that means, for example, is that no company in its right mind will force a stringent step-by-step selling process on its sales people. More technically speaking, Information Worker business processes are too non-linear to be orchestrated.

This is commonly interpreted to mean that BPM for information workers means collaboration, especially of the ad-hoc kind, plus pockets of BPM automation. For example, in the Sales process, the Quote generation process, which is a small piece of the overall process, may be implemented as a BPM process.

However, I think there is an opportunity to do better. Let’s start by defining the objectives of BPM for Information Workers (excluding those that are addressed by today’s solutions):

  1. Guide users to the most valuable activities, that is, activities that maximize objectives, leveraging organizational best practices and learned insights.
  2. Enforce policies and constraints.
  3. Enable loosely coupled interactions between stakeholders (a sort of implicit collaboration). For example, if the Sales Rep schedules a meeting with the customer executive, the Sales Engineer is automatically steered towards closing out all outstanding questions from the executive’s reports.

The Process from User Perspective

To explain my proposed solution, let me start by painting the picture from an end-user perspective. The following is a mock-up of an Opportunity application (to be accurate, the sidebar is a mock-up overlaid on a screenshot of the Nexus application):

Mock up of Opportunity application

In the above picture, pay attention to the Activities in the sidebar. The Sales Rep’s view of the process is essentially a list of activities along with their recommended values (the bar with green boxes). This list and the recommendations are not static. Activities that have prerequisites are not shown until the prerequisites are satisfied. For example, the Create Quote activity may become available only after the Perform Discovery activity is complete. Also, the recommendations change as various events happen. For example, if the customer browsed the web site for a demo, the Schedule Demo activity may become highly recommended.

In summary, as a Sales Rep, I am not constrained by the process; however, I am not flying blind either.

The Process from Implementation Perspective

From an implementation perspective, the business process, like most other business processes, is a collection of Tasks. However, these Tasks are not sequenced; instead, each Task is associated with rules which specify when it becomes available, when and whether it is required, and when it becomes unavailable.

Also, associated with each Task are rules to determine its value at any point of time.

From a BPM technology perspective, it requires:

  • A good event framework to tie everything together. Events trigger activation, deactivation, and other state changes of the Tasks.
  • Business Rules to capture the constraints and policies associated with the Task state changes.
  • A recommendation engine, preferably one based on Bayesian models (for example, Sigma Dynamics, which Siebel OEMed and Oracle acquired).
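The pieces above fit together in a simple way, which a sketch makes concrete. This is an illustration of the proposal, not a real engine: tasks carry an availability rule (a predicate over process state) and a base value, events update state or boost values, and the user's list is recomputed on each event. All task names, rules, and numbers are invented for the example.

```python
# Sketch of the rule-driven task list: each Task has an availability
# rule and a value; events change state or boosts, and the visible,
# ranked list is recomputed. Names, rules, and values are illustrative.

class Task:
    def __init__(self, name, available_when, base_value):
        self.name = name
        self.available_when = available_when  # predicate over process state
        self.base_value = base_value          # stand-in for a learned score

def visible_tasks(tasks, state, boosts):
    """Tasks shown to the user right now, ranked by recommended value."""
    shown = [t for t in tasks if t.available_when(state)]
    return sorted(shown,
                  key=lambda t: t.base_value + boosts.get(t.name, 0),
                  reverse=True)

tasks = [
    Task("Perform Discovery", lambda s: True, 50),
    Task("Create Quote", lambda s: "discovery_done" in s, 40),
    Task("Schedule Demo", lambda s: True, 30),
]

state, boosts = set(), {}
before = [t.name for t in visible_tasks(tasks, state, boosts)]
print(before)

# Event: the customer browsed the demo page -> boost "Schedule Demo".
boosts["Schedule Demo"] = 40
# Event: discovery completed -> "Create Quote" becomes available.
state.add("discovery_done")
after = [t.name for t in visible_tasks(tasks, state, boosts)]
print(after)
```

In a real system the boosts would come from a predictive (e.g. Bayesian) engine and the predicates from a business rules engine; the sketch only shows how events, rules, and recommendations combine into the sidebar the Sales Rep sees.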

Summary

While pockets of automation, collaboration, and analytics provide value to Information Worker business processes, there is potential for BPM to do more. BPM can become the Information Worker’s friend, guiding them without constraining them. In this quest, Events, Rules, and Bayesian-model-based predictive engines may be more important technologies than traditional process orchestration.

BPM as Service

March 2, 2007

Last month, two BPM vendors announced BPM as a Service offerings. First, Lombardi announced its hosted modeling offering, Lombardi Blueprint, which Bruce Silver covered here. Then, Appian announced a hosted version of its product, which Bruce again covered here. While these initiatives towards BPM as a Service are both welcome and inevitable, I doubt the true value of either. In fact, I find myself agreeing with Ismael’s blog, Who Needs BPM as a Service, on multiple counts.

The Lombardi Announcement

First, I doubt the wisdom of offering just modeling as a hosted service. What will the users get out of it – more paper? Also, I find their positioning on usability a distraction from the main issues. While PowerPoint is every business user’s dream, the main issue today is not the ease of use of the modeling tool, but round-tripping with the implementation model and operational results. Moreover, they seem to be disregarding the fact that analyst tools such as IDS Scheer’s Aris provide serious analysis and modeling capabilities.

In summary, a cool tool. However, if you want more paper, you may as well use PowerPoint itself.

The Appian Announcement

I confess I have not followed this closely. However, the following from Bruce’s blog caught my attention:

Human-centric processes that don’t require high-performance integration with ERP or other backend apps behind the firewall are the low-hanging fruit for Appian Anywhere.

In my experience, BPM is never a standalone animal. It either integrates with existing applications or ends up building new application components around it, including data objects and user interfaces. Since the first seems ruled out, the second would be the targeted use case. In that case, I agree with Ismael that companies like Coghead have a more compelling story. If truly no application components are needed, you essentially have a collaboration application; again, companies like Ning or Jotspot (now Google) are the way to go.

In summary, interesting. However, what is the use case? And why not Coghead, Ning, or Jotspot?
