Saturday, November 3, 2012

Building a Reporting Tool on Top of Analysis Services: Understanding OLAP


In order to effectively build a reporting tool, one first needs to understand On-Line Analytical Processing (OLAP).


1.1  What is OLAP?

OLAP can be viewed as a method of aggregating data in a way that facilitates the construction and interpretation of queries on multidimensional databases. OLAP tools give business analysts with little knowledge of query languages the ability to explore complex databases, because the data is structured according to dimensions, hierarchies, levels and measures that map naturally onto the business being analyzed. Instead of dealing with technical notions like foreign keys, indexes and primary keys, the user works with dimensions like “Product” and “Time” and drills down through levels according to the desired degree of detail.

1.2  How does OLAP work?

OLAP relies on multidimensional cubes to aggregate data, and queries are defined according to the following principles:
·         Dimensions: The areas along which it makes sense to group the data. For example, it makes sense to aggregate all the months, weeks and days under the dimension TIME.
·         Hierarchies: How dimensions are structured. A dimension can have several hierarchies because the same data can be organized according to different rules.
·         Levels: Levels work like zoom controls; they let the user drill down and up through degrees of detail.
·         Measures: Although a component of OLAP, measures are a special dimension of the OLAP cube consisting of the output the user wants to know, usually a numerical indicator.
Let’s consider the following example in order to understand each of the principles explained:
“A business analyst, working for a car dealer, wants to know the net sales and the gross sales in Germany during the first quarter of the current year.”
For this he needs to use the Sales OLAP cube that, for this purpose, has the following structure:

Dimension   | Hierarchy              | Level          | Members (Examples)
Operational | Country                | Country        | Germany
Operational | Dealer                 | Make           | Mazda
Operational | Dealer                 | Stand          | Major MAZDA, Inc
Time        | Calendar/Standard Time | Year           | 2012
Time        | Calendar/Standard Time | Quarter        |
Time        | Calendar/Standard Time | Month          | March
Time        | Calendar/Standard Time | Day            | 3rd
Time        | Week Time              | Year           | 2011
Time        | Week Time              | Week           | 25
Time        | Week Time              | Day            | Monday
Measures    | Measures               | Measures Level | Net Sales; Gross Sales


In the example above, the user could combine the highlighted parameters to query the OLAP cube. This is not the only configuration that yields this information; for example, to limit the sales to the first semester, instead of choosing the 1st and 2nd quarters one could drill down from the Quarter level to the Month level and select January through June.
Organizations usually work with several cubes (stocks, prices, people, etc.), but what is important to understand is how dimensions, hierarchies and levels work together to deliver the measures you are looking for in your reports.
Lloyd (2011) considers OLAP one of the four components of Business Intelligence, and the technology is present in most BI solutions on the market. One of them is Microsoft SQL Server, which provides OLAP through Analysis Services. There are several ways to build a reporting tool on top of this OLAP solution, especially through the use of Excel. That will be the theme of the next post, where the reader will find what is needed to decide how to build a reporting tool.

Sunday, October 28, 2012

Direct Mail TYPO3 "content could not be fetched" solution

Possible solution for "The plain text content could not be fetched. The HTML content could not be fetched."

TYPO3 is a good CMS and a lot of people seem to be using it. One famous and handy extension is direct_mail, which allows, among other things, sending newsletters.

Recently I was trying to add this extension to my TYPO3 instance but when I tried to make a newsletter based on a page, internal or external, I would always get "HTML content could not be fetched".

There seem to be multiple reasons for this issue, but I'll focus on the one I had, which required code debugging.

I was using the latest direct_mail version 3.02, had "allow_url_fopen = On" and "use CURL" activated in TYPO3's install tool. If you want to change allow_url_fopen, do it in your php.ini.

My solution for the direct_mail extension "content could not be fetched" issue was

adding the IP mapping to the /etc/hosts file on the server. I just wrote the server's network IP (192.168.101.31 in my case) followed by my domain:
/etc/hosts
server_ip             mydomain.com

Why is this necessary?

The direct_mail extension uses CURL to fetch the newsletter's page, so it tries to connect to an address; if the machine's IP isn't mapped in the hosts file, the connection will fail.

What I don't get is why direct_mail doesn't provide more info besides "The plain text content could not be fetched. The HTML content could not be fetched".

If you have any other solutions for this issue, or if this didn't help, let me know by adding comments.

Wednesday, October 3, 2012

Push notification to Web client


Due to the nature of the HTTP protocol, pushing data to the client isn't trivial. The flow of web communication is request-response, meaning the server has a hard time sending data unless there is a request from the client.
Server notifications where the client doesn't need to be constantly asking whether there are new events (polling) already exist in mobile OSs, the HTML5 WebSocket API and Server-Sent Events. But until these standards are more widely implemented across browsers, developers still have to come up with elaborate schemes to provide real-time updates without constant page refreshes.
This post presents a server notification approach using AJAX in a Java Servlet environment.

There are multiple ways to accomplish this notification paradigm using AJAX, but we'll focus on a low-level solution based on long polling.

Long Polling

Long polling, aka COMET, is a concept that's easy to understand but harder to implement.

When a browser makes a request to a server, an HTTP connection is created between the two. Normally the server replies immediately with data, for instance HTML content, and the connection closes. But if we could hold that request and only reply when there was something new, then we would have a notification system in place.

For example, let's assume we want to track a football match and the user should be notified when a new game event happens. We could periodically ask the server for the game state (polling), or we could ask just once and have the reply sent only when there is a new event (long polling). The latter raises two issues: how can the server hold the request and keep the connection alive, and what about request timeouts?

Holding the connection (sleep or wait):

The easiest way to keep the connection alive is simply putting a sleep(time) inside a while loop. This way you can hang the request while there isn't anything new and keep checking from time to time. This is a possibility, but I personally prefer using wait() because you don't need to set a time and you synchronize on an object rather than a thread.

How wait() works:

What wait() does is hold the thread's execution until another thread sends a notification to resume. This is essentially what we want: the thread in charge of the HTTP request waits until the server has a new notification.
Like sleep, wait also needs to be inside a while loop, because spurious wakeups can break a thread's waiting state. Another requirement of wait is that the calling thread must own the monitor of the object on which wait is invoked. This basically means only the calling thread has access to the object at the moment of invocation; in other words, wait is called inside a synchronized block.

Notification Architecture Example:

Going back to the football example the following diagram illustrates a possible architecture of the notification system:

The "Thread Match Watcher" is responsible for monitoring the game status; every time there is an event, like a goal or a penalty, it adds it to the events queue and notifies every request thread waiting for new events. This notification is done by invoking notif.notifyAll().
Each client request has an event_id associated with it (a first request starts at 0). If there is a queue entry at index = event_id, it is returned immediately along with the remaining ones; the event_id is then incremented to last_event_returned_index + 1 and the client makes a consecutive request. If there aren't new events, the thread responsible for the request invokes notif.wait() and waits on the notif object until a notification comes.

Request timeouts won't be any trouble: we just issue a new request from the client using the same event_id, so we don't miss anything.

Obviously things might not be as simple as this, due to the cost of allocating a thread per request: if there are thousands of threads waiting for notifications, resource management might become a problem. This can be mitigated by setting a timeout variable to true every 30 minutes and invoking notifyAll() on the notif object. This way all pending requests are dropped and only the still-active clients re-request the new events.

Pseudo code:


/* request Thread: runs when there wasn't any new event */
synchronized(notif)
{
    while(!timeIsUp)
    {
        notif.wait(); /* a real implementation must catch InterruptedException */
        /* this runs on notif.notifyAll() or a spurious wakeup */
        if(event_id <= eventQueue.size()-1 && eventQueue.get(event_id) != null)
        {
            /* write response */
            resp.getWriter().print(eventQueue.get(event_id));
            timeIsUp = true;
        }
    }
}

/* watcher Thread */
while(gameIsRunning)
{
    GameEvent event = getGameEvent();
    synchronized(notif){
        eventQueue.add(event);
        notif.notifyAll();
    }
}
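The wait/notify flow above can be tried as a small self-contained Java program. The class and member names are mine, for illustration; a real servlet would write the event to the HTTP response instead of printing it:

```java
import java.util.ArrayList;
import java.util.List;

public class NotificationDemo {

    private static final Object notif = new Object();
    private static final List<String> eventQueue = new ArrayList<String>();

    /** Simulates the watcher thread: adds one event and wakes all waiters. */
    static void startWatcher() {
        new Thread(new Runnable() {
            public void run() {
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
                synchronized (notif) {
                    eventQueue.add("GOAL");   // new game event
                    notif.notifyAll();        // wake every waiting request thread
                }
            }
        }).start();
    }

    /** A request thread: waits on notif until an event at eventId exists. */
    static String awaitEvent(int eventId) throws InterruptedException {
        synchronized (notif) {
            while (eventQueue.size() <= eventId) {
                notif.wait();                 // releases the monitor while waiting
            }
            return eventQueue.get(eventId);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        startWatcher();
        System.out.println(awaitEvent(0)); // prints GOAL once the watcher fires
    }
}
```

Note that the while loop around wait() handles spurious wakeups, exactly as discussed above.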

Wednesday, August 15, 2012

Building a Reporting Tool on Top of Analysis Services: Introduction to Reporting


The information paradigm in the business world has shifted. Nowadays the problem is not the lack of data, but its surplus. Organizations have invested in expensive information systems that are able to feed executives, on an almost daily basis, with sales, stock and cost figures. Now the question is not how much we are selling or spending, but what we can do with the data we have. In other words: “How can we transform data into information, and information into knowledge that supports business decisions?”
Data by itself isn't enough to make decisions; it needs interpretation and context. Only after selecting and organizing the right data can we start to produce information and establish the foundation for knowledge. We are now entering the realm of big words like Business Intelligence, Data Mining, OLAP, etc., and although these are very interesting subjects, the next few posts focus on a specific component of BI: Reporting.


1.1       What is Reporting?

According to Ranjan (2008), reporting consists of providing aggregated views of data in order to keep users informed about the business performance of their organization. A reporting tool is a graphical interface that allows the user to construct the queries needed for a business report and then build the document presentation. The tool must mask the technical aspects of the query language and allow the management and reuse of periodic reports (Loshin, 2003).
In light of this definition, Excel is a reporting tool, since it allows connections to databases, enables data import and, as spreadsheet software, has all the tools needed to produce professional reports. Although necessary, these characteristics aren't enough to fulfill all the requirements of a reporting tool.

1.2       What features should a reporting tool have?

The features that follow are proposed by Kumar (n.d.) and partially considered in (Hagerty, et al., 2012) as criteria to decide between the different BI solutions on the market.
Connection to Databases: The reporting tool's compatibility with different types of databases is an important factor.
Schedule and Distribution: Some reports take considerable time to process and therefore justify scheduled updates. The distribution channels are a big part of the reporting tool and crucial to its success, since they directly affect content management.
Security: In the business world you need credentials management, since different users have different levels of access to the database.
Customization: Static reports are rarely a good idea. Users want the ability to change and adapt reports to their personal needs. The degree of customization is one of the most important aspects when choosing or building a reporting tool.
Export Capabilities: The most common needs are exporting to Excel, PDF and non-proprietary files.
Office Integration: Microsoft's suite leads the productivity software market, and its integration with a reporting tool is usually valued.

1.3       When should a reporting tool be bought or built?

A company doesn't need to buy an entire BI platform; instead it can acquire only the features that are worth the investment and develop the rest in house. Besides the lower costs, this solution has the advantage of creating, and keeping behind doors, the knowledge needed to improve and adapt the tools to the company's needs. However, this option isn't advisable for crucial IT infrastructure, like the data warehouse and OLAP systems.
In order to know if a reporting tool should be bought or built, the following factors must be considered:
Number of Reports and Users: If the reporting tool will be used by a large number of people requesting a large number of reports, then going for a commercial version may be the right decision. This assures better support and a more solid platform, and takes advantage of the experience of someone who has already built many reporting tools.
Distribution Channels: If the reports are going to be shared by email or over local networks, then it's reasonable to develop the reporting tool. If, on the other hand, content management and synchronized distribution are major concerns, then buying makes more sense, since this usually implies integration with other software.
Ad hoc Reports: If users must be able to build their own reports, then it's advisable to buy the reporting tool, since providers of this kind of solution are more experienced and can anticipate the most commonly requested features.
Not all reporting tools have a price tag; some are free, but these are usually more limited, don't support integration with the most common database platforms, and most come in the form of add-ins to paid software like Microsoft Excel. Free reporting tools should be seen as an additional aid to your reporting repertoire that may provide advanced and more specific functions like data mining, reporting templates, etc.

1.4       Conclusions

Reporting can be a very demanding task, depending on the complexity of the business it covers. Furthermore, its boundaries are not always well defined, and business analysts often find their tools falling short of the features needed to support their functions.
It should now be clear that a reporting tool depends on a large number of variables, from the specifics of the databases it connects to, to the portfolio of features it must have to satisfy users' needs. This usually leads to one of two outcomes: the reporting tool is acquired as part of a Business Intelligence platform, and integration with other software is therefore assured, or the reporting tool is built and customized to the specific needs of its users. The latter option is going to be addressed in the next posts, where the reader will gather the basic knowledge to build a reporting tool using Excel.

Bibliography


Hagerty, J., Sallam, R. L. & Richardson, J., 2012. Magic Quadrant for Business Intelligence, s.l.: Gartner.
Kumar, R., n.d. Reporting Solution, s.l.: Binary Semantics.
Loshin, D., 2003. Business Intelligence: The Savvy Manager's Guide. s.l.: Morgan Kaufmann Publishers.
Ranjan, J., 2008. Business justification with Business Intelligence. VINE, Vol.38, pp. 461-475.

Tuesday, July 17, 2012

jBPM application example demo

This is a DEMO of what is possible when embedding jBPM in a centralized system to execute workflow/business processes.

What the system "basically" does is control media equipment to "convert" files.

It's a Web Application that communicates with a JBoss web server, where jBPM and Drools Guvnor are running.

jBPM was used to design and run the workflows. During execution, tasks are assigned to the media equipment. A SOAP layer is used to interact with the network resources, and load balancing for parallel tasks is considered. The demo has three "sections":
  • Adding service tasks to be used during business process design.
  • Business process design with jBPM Web Designer
  • Workflow execution and progress notifications.
The following list pairs demo features with the jBPM/Drools Guvnor components used during implementation:
Because the first three items were already covered in a previous post, I'll start with asynchronous handlers.

jBPM Task execution with Asynchronous handlers

There really isn't much to it. To execute domain-specific tasks, those that run custom code, jBPM uses the notion of a WorkItemHandler, an interface with two methods:
executeWorkItem(WorkItem arg0, WorkItemManager arg1) and abortWorkItem(WorkItem arg0, WorkItemManager arg1). To run asynchronous tasks, the developer just needs to use a thread in the implementation of executeWorkItem. There is one thing to keep in mind though: jBPM needs to be told when a service task has completed, through the method WorkItemManager.completeWorkItem(long workItemId, Map results). Because of these arguments, the thread must have access to the WorkItemManager and the respective WorkItem id, and must call the method upon completion. The easiest way to do this might be extending Thread in the class that implements WorkItemHandler. Illustration code follows:


public class ActivityHandler implements WorkItemHandler {

 @Override
 public void executeWorkItem(WorkItem arg0, WorkItemManager arg1) {
    new WorkItemExecuter(arg0, arg1).start();
 }

 @Override
 public void abortWorkItem(WorkItem arg0, WorkItemManager arg1) {
    /* abort logic (left empty for this example) */
 }
}

public class WorkItemExecuter extends Thread{

  private WorkItem workItem;
  private WorkItemManager workItemManager;
  public WorkItemExecuter(WorkItem arg0,WorkItemManager arg1)
  {
     this.workItemManager = arg1;
     this.workItem = arg0;
  }

 @Override
 public void run() {
    /* Execute the task 
     * ....*/
  
    /* When finished warn jBPM the task is completed */
    this.workItemManager.completeWorkItem(this.workItem.getId(), null);
 }
}


Notify user about workflow events

Notifications to the user are made by implementing the ProcessEventListener interface. Whenever events like "before node triggered" and "after process completed" are fired, the corresponding interface method is called, and from there it's possible to notify the user about workflow progress or request input needed for a certain task. The workflow event architecture was something along these lines:
  • beforeNodeTriggered was used to check whether the node's "requirements" were fulfilled. For instance, the two parallel activities needed some input, so the Asset Explorer is called and video "assets are injected" into the tasks.
  • afterProcessCompleted fires the last notification, informing the user that the workflow has ended.
Getting information from jBPM regarding workflow progress is quite easy with the ProcessEventListener interface; what isn't as trivial is notifying the user in real time about those events, due to HTTP's request/response nature. Considering that a workflow might take hours to complete, managing a connection to the client so it's possible to push notifications requires some effort. If you can easily push data to your interface environment this won't be an issue, but in the HTML (WebSocket-less) realm it's quite a nagging problem.
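As a sketch of such a listener, assuming jBPM 5's org.drools.event.process API (the class name and the notifyUser helper are mine, and the jBPM/Drools jars must be on the classpath):

```java
import org.drools.event.process.*;

public class WorkflowNotifier implements ProcessEventListener {

    public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) {
        /* check the node's "requirements" here, e.g. ask the Asset
           Explorer to inject video assets into the task */
        notifyUser("Starting node: " + event.getNodeInstance().getNodeName());
    }

    public void afterProcessCompleted(ProcessCompletedEvent event) {
        notifyUser("Workflow " + event.getProcessInstance().getId() + " ended");
    }

    /* remaining callbacks are not needed for this sketch */
    public void beforeProcessStarted(ProcessStartedEvent event) { }
    public void afterProcessStarted(ProcessStartedEvent event) { }
    public void beforeProcessCompleted(ProcessCompletedEvent event) { }
    public void afterNodeTriggered(ProcessNodeTriggeredEvent event) { }
    public void beforeNodeLeft(ProcessNodeLeftEvent event) { }
    public void afterNodeLeft(ProcessNodeLeftEvent event) { }
    public void beforeVariableChanged(ProcessVariableChangedEvent event) { }
    public void afterVariableChanged(ProcessVariableChangedEvent event) { }

    private void notifyUser(String msg) {
        /* hypothetical: push msg to the waiting HTTP request,
           e.g. via the long polling scheme from the previous post */
        System.out.println(msg);
    }
}
```

The listener would be registered on the session with ksession.addEventListener(new WorkflowNotifier()).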

Share data between activities automatically

From what I could understand, in order to share data between a process's tasks the user must set data associations during BPMN 2.0 design. This ensures, for example, that WorkItems are as independent from processes as possible, but on the other hand it demands a user interaction that might not be very intuitive. Thus, to map inputs and outputs between process tasks automatically, you can use org.drools.definition.process.Node and org.drools.definition.process.Connection. These two classes allow you to "navigate" through the workflow and find each task's dependencies, so you can map the output of one task as the input of the next one. Be aware of the (WorkflowProcess) cast. Code follows:

 KnowledgeBase kbase = kbuilder.newKnowledgeBase();
 WorkflowProcess workflow = (WorkflowProcess) kbase.getProcess(processId);
 for (Node node : workflow.getNodes())
 {
     if (node instanceof WorkItemNode)
     {
         for (String key : node.getOutgoingConnections().keySet())
             markOutInputs(node.getOutgoingConnections().get(key));
     }
 }

 private void markOutInputs(List<Connection> list) {

     for (Connection conn : list)
     {
         Node to = conn.getTo();
         /* only reachable WorkItemNodes will be affected */
         if (to instanceof WorkItemNode)
         {
             /* the connection ends in a WorkItemNode */
             /* handle data input mapping here */
         }
         else
             for (String key : to.getOutgoingConnections().keySet())
                 markOutInputs(to.getOutgoingConnections().get(key));
     }
 }



Tuesday, May 8, 2012

BPMS (Business Process Management System)

This post introduces the following BPMS:

  1. JBPM
  2. Activiti
  3. Apache ODE
  4. Intalio
  5. Bonita


1. JBPM

“jBPM is a flexible Business Process Management (BPM) Suite. It makes the bridge between business analysts and developers. Traditional BPM engines have a focus that is limited to non-technical people only. jBPM has a dual focus: it offers process management features in a way that both business users and developers like it” (jBPM), and it is written in Java. “The jBPM project has merged with the JBoss Drools project (an open source business rule management framework) and replaced Drools Flow as the rule flow language for the Drools framework” (Liempd).
In a nutshell, jBPM's main functionality is modeling and executing a business process using BPMN 2.0.
            In order to define the business process this framework provides four main components:
·         The Eclipse Editor, a plugin for Eclipse that adds a designer to graphically define the business process.
·         A web based designer, similar to the Eclipse editor but for web browsers. An example can be seen in the image below.
·         The jBPM console, which allows business users to inspect and control the process state.
·         A REST API to interact with the engine.


JBPM Web Designer


A business process is made of nodes which are connected using sequence flows. BPMN 2.0, which is used by jBPM, defines the following types of nodes:
·         Events: Represent the different kinds of events included in the business process, for example a start event or an end event.
·         Actions: Define the actions performed during process execution. This is where the actual work of the workflow is represented, for example a human task or sending an automated email to users. Actions can be nested within other actions.
·         Gateways: Used to define multiple paths in the workflow.
            In the following picture it's possible to see the Events (1), Actions (2) and Gateways (3).


JBPM Eclipse Designer Plugin
How are business processes implemented in jBPM? First, a process description must be created, either through the designers, as shown in the figures above, or by writing the XML directly. Second, jBPM needs to know the process descriptions, and for that there is an API component called KnowledgeBase, which is responsible for maintaining all the process definitions executed by a session. The Session is the connection between the process descriptions in the KnowledgeBase and the engine. Actions that interact with the process instance, like aborting the workflow, are defined in the session interface. With only these two components it is possible to run a business process, but jBPM also offers listeners for events that can occur at any stage of the workflow.
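A minimal sketch of that flow, assuming jBPM 5's API and a hypothetical sample.bpmn2 file on the classpath (the jBPM/Drools jars are required):

```java
import org.drools.KnowledgeBase;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.process.ProcessInstance;

public class RunJbpmProcess {
    public static void main(String[] args) {
        // 1. Load the process description into a KnowledgeBase
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("sample.bpmn2"),
                     ResourceType.BPMN2);
        KnowledgeBase kbase = kbuilder.newKnowledgeBase();

        // 2. Open a session: the bridge between the definitions and the engine
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();

        // 3. Start the process by its id; the session can also abort it, etc.
        ProcessInstance instance = ksession.startProcess("com.sample.process");
        System.out.println("State: " + instance.getState());
    }
}
```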

2. Activiti

“The Activiti project was started in 2010 by the former founder and the core developer of jBPM (JBoss BPM), respectively Tom Baeyens and Joram Barrez. The goal of the Activiti project is crystal clear: built a rock-solid open source BPMN 2.0 process” (Liempd). Because of this, there are many similarities with jBPM, but some implementation choices make Activiti different enough to be a serious competitor.
As with jBPM, Activiti has an Eclipse plugin to graphically design business processes. It has a web based application called Activiti Explorer which allows users to monitor and control business processes, but it doesn't provide features to design one. A REST API is also available to support engine related functions from external tools, but it is still in an experimental phase, meaning it “should not be considered stable” (Activiti).
            One obvious difference is the web based designer. While jBPM has a web designer out of the box, Activiti relies on an external solution from Signavio, forcing extra configuration and installation to make it work. This component is called the Activiti Modeler.
            A summary of the Activiti tool stack is visible below.

Activiti tool stack
To compare with jBPM, the following picture shows the graphical design plugin for Eclipse. As BPMN defines the visual representation, the result is similar to the one in the JBPM Eclipse Designer Plugin figure above.

Activiti Eclipse Designer
The picture below shows a business process modeled using the web designer by Signavio, configured to work with Activiti.


Signavio designer customized for Activiti

How are business processes implemented in Activiti? As with jBPM, the workflow engine needs a business process description, which can be defined using the various components: through the Eclipse Designer, by writing the BPMN 2.0 XML directly, or with the Activiti Modeler. Afterwards the model must be deployed using the Activiti Explorer or Activiti's API.
            Activiti's API has two main components, the ProcessEngine and the Services. The ProcessEngine is responsible for running the business processes and exposing the Services, which are interfaces to interact with the engine. While in jBPM a component called Session was responsible for all interactions with the engine, in Activiti those methods are divided among Services. For example, if a developer wants to run a business process he must get the RuntimeService from the ProcessEngine. The same goes for deployment actions, which are made using the RepositoryService returned by the ProcessEngine. With this structure the methods are more fragmented and not as concentrated in a single component as in jBPM.


3. Apache ODE


This open source solution from Apache is quite different from the ones analyzed so far, mainly because, instead of BPMN 2.0, Apache ODE supports the WS-BPEL description, meaning there isn't a standardized graphical representation of the business process. As such, the main focus of Apache ODE is business process execution through the interaction of web services. While Activiti and jBPM have a complex API, the Apache solution relies on the BPEL execution description, running the process in conjunction with the data received through web services. This results in a far simpler solution, installation-wise, but the interaction is not as straightforward.

            How are business processes implemented in Apache ODE? As with the other workflow engines, the first step is describing the business process, either using the Eclipse BPEL Designer plugin, as in the picture below, or writing the XML directly. Then the web services interface must be defined, where the interaction and evolution of the business process is exposed. Using the WSDL Editor in Eclipse is an easy way to do it, as in the second picture. The components defined using the web services description language are the binding between the port where the process service is available, the process itself and the service. The final step is the deployment descriptor, which is responsible, among other things, for connecting the interface with the client.
            “The deploy.xml file configures one or several processes to use specific services. For each process, deploy.xml must supply binding information for partner links to concrete WSDL services” (Apache).
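A minimal deploy.xml along those lines might look like this (the process, service and namespace names are hypothetical):

```xml
<deploy xmlns="http://www.apache.org/ode/schemas/dd/2007/03"
        xmlns:pns="http://example.com/process"
        xmlns:wns="http://example.com/service">
  <process name="pns:SampleProcess">
    <active>true</active>
    <!-- bind the "client" partner link to a concrete WSDL service/port -->
    <provide partnerLink="client">
      <service name="wns:SampleProcessService" port="SampleProcessPort"/>
    </provide>
  </process>
</deploy>
```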

Business Process in Eclipse BPEL Editor

WSDL using Eclipse WSDL plugin
The web based tool for monitoring the processes is illustrated next. It gives an overview of the number of processes running, displays the .wsdl of each service, has a deployment feature to start new business processes and provides some management over the running workflows.


Apache ODE web tool

4. Intalio


“Intalio|BPMS is the world's most widely deployed Business Process Management System (BPMS). Designed around the open source Eclipse BPMN Modeler, Apache ODE BPEL engine, and Tempo WS-Human Task service developed by Intalio, it can support any processes, small or large.” (Intalio).
            Intalio is the most commercially driven solution of the ones analyzed so far. In fact, if the free version is chosen, only 80% of the source code is available and features like DBMS compatibility are more limited than in the paid option.
            The software is divided into two components, the server and the designer plugin for Eclipse. The server package contains the actual business process engine, which is Apache ODE. A web app to interact with it, much like the Apache solution's, can be seen in the picture below. A nice detail is the localization, which detects the user's language, setting it, in this case, to Portuguese.
            While jBPM, Activiti and Apache ODE are based on only one language, BPMN 2.0 or BPEL, Intalio uses two. For graphical design it uses BPMN, but for execution Intalio converts it to BPEL. How this conversion is performed is not documented, and because there is no standard for it, figuring out how it's done is not trivial. The BPEL code is necessary because Intalio uses the Apache ODE engine, but it adds a BPMN layer with the designer.

Intalio process web app
How are business processes implemented in Intalio? Intalio relies heavily on the designer when describing the business process. Even the web services layer, which Apache ODE also has, is done graphically. This would be a great way to simplify things, but the designer isn't as polished as those in the other suites. For instance, while undoing a considerable number of actions in the Eclipse plugin, some elements appeared that had not been added previously: connection arrows, empty tasks, among others. Nevertheless, after the business process is defined using BPMN, the interface layers must be added. For example, if the user wants to monitor the process using the Intalio|BPMS console, a new layer must be added to the design. The same goes for exposing the business process through web services. To better understand how it all fits together, an example is shown in the next picture, with the interface for the console in the top layer and the web services interactions in the lower one. The binding for how messages are exchanged is defined using an XML Schema. After all these artifacts are correctly implemented, the process is ready for deployment, and because the execution engine is Apache ODE this phase is similar to the one described for the Apache solution.


Intalio Designer


5. Bonita

The most user friendly and intuitive solution analyzed in this section, Bonita is clearly more focused on the user than on the developer. As such, it's very easy to quickly design and run a business process, but implementing advanced external communication is not very straightforward. Proof of this is the fact that “BOS [Bonita Software] does not yet provide a public API to build processes so you'll have to develop them using the studio. So no possibility using a web browser yet” (Bouquet).
            The studio, shown in the picture below, is a designer where all the business process implementation takes place. Bonita's approach with the studio is to define all the process's aspects through graphical elements; even web services are implemented using components called connectors. With this structure it's very easy for a process engineer to describe and run a business process, but at the same time it limits the possibility of extending its functionality, as seen with the constraint of using a customized web designer.

BonitaSoftware Studio

In order to monitor and control the running business processes, BonitaSoftware has the User Experience mode, visible in the next picture, which is web based and has an email-like layout.
BonitaSoftware User Experience

How are business processes implemented in BonitaSoftware? There really isn't much to it: just design the business process or import one (BPMN 2.0, XPDL or jPDL are supported) and run it; the process will then be available through the User Experience application. All these actions are made through the self-explanatory GUI.

References: Liempd (Activiti in Action)