Tuesday, November 11, 2014

CXF Transformations Revisited: Usage within a Camel Route
The CXF transformation feature provides a nice way to declaratively specify certain simple transformations – e.g. replace element names, add elements, change namespaces, etc. My previous blog post described this mechanism in detail.
This feature makes it easy to describe the transformation in a map in Spring XML. The ease of use in declaring a transformation with XML makes it a very appealing solution. Such an easy, declarative mechanism for performing transformations is not available in Camel itself.
The out-of-the-box use of CXF transformation feature in a Camel route is restrictive because the transformation is tied to the CXF endpoint, so the transformations defined cannot be applied to messages at any arbitrary location within a route’s pipeline. 
The purpose of this investigation was to determine whether the API that implements the CXF transformation feature could be de-coupled from CXF and re-used from within a custom Camel processor to perform transformations on messages at arbitrary locations within a route.
A prototype was written that proves it is indeed possible to use the CXF transformation feature to define transformation mappings (via XML) using the same syntax as the CXF feature. The transformation mapping is assigned to a generic processing bean. The processing bean can then be inserted at any location within your camel route.
To implement this pattern, you define the transformation in your Spring XML ('camel-context.xml') using the same mapping syntax as the CXF transform feature.

An example of how this would look in Spring is shown below:
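The original snippet is not reproduced here; a sketch of such a bean declaration follows. The class name is the one referenced below; the map property name follows the CXF transform-feature conventions, and the element names and namespaces are placeholder values:

```xml
<!-- Transformation mappings assigned to a generic processing bean
     that can be placed anywhere in a Camel route -->
<bean id="myTransformer"
      class="gov.foo.services.enterprise.deqy.transformer.TransformFeature">
    <property name="inTransformElements">
        <map>
            <!-- rename element 'oldElement' to 'newElement' -->
            <entry key="oldElement" value="newElement"/>
            <!-- move all elements from one namespace to another -->
            <entry key="{http://old.namespace}*" value="{http://new.namespace}*"/>
        </map>
    </property>
</bean>
```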
The bean declared above is an instance of the class "gov.foo.services.enterprise.deqy.transformer.TransformFeature". This Java class has several properties (expressed as maps) that hold the logic required for the transformation.
The following code snippet lists the implementation of the underlying Java class represented by the bean declared above.

The "process()" method gets called as the message passes through the bean. The code pulls the body of the message out as an XMLStreamReader, then calls the "org.apache.cxf.staxutils.transform.InTransformReader" constructor with the appropriate maps. This is where the code re-uses the CXF transformation implementation. The XMLStreamReader that is returned has been transformed by the CXF API, and it is set as the body of the message that gets passed along in the route.
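The actual implementation depends on CXF classes and is not reproduced here. Purely as an illustration of the underlying idea, wrapping an XMLStreamReader so that element names are rewritten as events are read, here is a stdlib-only sketch using a StAX StreamReaderDelegate (the class and element names are made up, and CXF's InTransformReader is considerably more capable):

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import javax.xml.stream.util.StreamReaderDelegate;

public class RenameDemo {

    // Wrap a reader so one element name is replaced on the fly,
    // similar in spirit to what CXF's InTransformReader does with its maps.
    static XMLStreamReader renaming(XMLStreamReader reader, String from, String to) {
        return new StreamReaderDelegate(reader) {
            @Override
            public String getLocalName() {
                String name = super.getLocalName();
                return from.equals(name) ? to : name;
            }
        };
    }

    // Read the (transformed) event stream and serialize it back to a string.
    static String transform(String xml, String from, String to) throws Exception {
        XMLStreamReader in = XMLInputFactory.newFactory()
                .createXMLStreamReader(new StringReader(xml));
        XMLStreamReader reader = renaming(in, from, to);
        StringBuilder out = new StringBuilder();
        while (reader.hasNext()) {
            switch (reader.next()) {
                case XMLStreamConstants.START_ELEMENT:
                    out.append('<').append(reader.getLocalName()).append('>');
                    break;
                case XMLStreamConstants.CHARACTERS:
                    out.append(reader.getText());
                    break;
                case XMLStreamConstants.END_ELEMENT:
                    out.append("</").append(reader.getLocalName()).append('>');
                    break;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform("<oldName><child>42</child></oldName>",
                "oldName", "newName"));
    }
}
```

In a real processor the transformed XMLStreamReader would be set back as the message body rather than serialized to a string.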
The following Camel route ('camel-context.xml', expressed in Spring XML) puts it all together and shows how the transformation can be placed at an arbitrary location in any route. The route picks up a file from the 'inbox' directory and passes its content to the processing bean, which performs the transformation defined in the same Spring file (via the mapping shown previously). The transformed content then goes to a file component that writes it to the 'outbox' directory.
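As a sketch (the directory names and the processor bean id are illustrative, not from the original post), such a route could look like:

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <!-- pick up files from the inbox directory -->
        <from uri="file:inbox"/>
        <!-- apply the CXF-style transformation at an arbitrary point in the route -->
        <process ref="myTransformer"/>
        <!-- write the transformed content to the outbox directory -->
        <to uri="file:outbox"/>
    </route>
</camelContext>
```

The `<process>` step is the key point: because the transformation lives in a plain processing bean, it can be inserted anywhere in the pipeline, not just at a CXF endpoint.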


Overview of CXF Transformation Feature
The CXF Transformation feature allows you to declaratively define a transformation to change namespaces or append/change/drop elements. The transformation is very much tied to the CXFEndpoint.

Intended Use-cases 
The CXF transformation feature works well for the following use-cases.
  1. Transforming Incoming Requests from clients - A route exposes an interface to clients via a WSDL. Some clients may not be capable of sending messages (or receiving responses) that adhere to the interface defined by the route's WSDL. The CXF transform feature lets you define transformations on the CXFEndpoint so that incoming non-conforming messages (e.g. with incorrect namespaces or element names), messages that would otherwise cause an error when sent to the route, are "fixed" to adhere to the expected interface defined by the WSDL. Likewise, you can modify the well-formed outgoing response into whatever the client expects.
  2. Transforming Outgoing Requests to backend - A route sends requests to a backend service that are well-formed according to a specific WSDL, but the backend service expects some tweak to what the route is sending. This applies, for example, when the route's CXFEndpoint (client) is ahead of the backend service on changes to element names or namespaces that adhere to an enterprise standard. At some point the backend will be updated to accept these changes; until it is conformant, a transformation applied to the CXFEndpoint (client) in the route tweaks the outgoing messages so that the backend service can understand them. Likewise, the responses from the backend service can be transformed as needed back into the well-formed format defined by the enterprise WSDL.
Transforming Incoming Requests from clients
Let's say we want our route to expose the interface defined by a new "enterprise" WSDL, and we want to go ahead and deploy that. Even if some clients are not ready to adhere to the new WSDL, those clients can continue to send requests that adhere to the "old" WSDL (different operation names, namespaces, element names, etc.).
In our new route, we just need to include a mapping in the Spring file that defines the transformation on incoming messages. As long as we add the transformation rules to our route, it is able to continue to service clients that don't adhere to the new WSDL, and the route will modify their messages as defined in the transformation.
Below is the Spring configuration of a route that exposes the interface defined in 'deqy.wsdl'. Imagine this is our "new" interface. If there are clients that must continue to call our route but can't change to the new WSDL, the route can be configured (via the transformation mapping) to accommodate them.
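The original snippet is not reproduced here; a sketch of the key pieces, using CXF's StaxTransformFeature with placeholder namespaces and a made-up service class, might look like:

```xml
<!-- CXF endpoint exposing the interface defined by deqy.wsdl -->
<cxf:cxfEndpoint id="deqyService"
                 address="/deqyService"
                 wsdlURL="wsdl/deqy.wsdl"
                 serviceClass="gov.foo.services.DeqyPortType">
    <cxf:features>
        <!-- fix non-conforming incoming requests so they match the WSDL -->
        <bean class="org.apache.cxf.feature.StaxTransformFeature">
            <property name="inTransformElements">
                <map>
                    <!-- map the old namespace onto the namespace the WSDL expects -->
                    <entry key="{http://old.namespace}*"
                           value="{http://new.namespace}*"/>
                </map>
            </property>
        </bean>
    </cxf:features>
</cxf:cxfEndpoint>
```

Conforming messages are untouched by a mapping like this, since only elements in the old namespace match the entry key.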

The above code snippet is a ‘camel-context.xml’ (Spring configuration) for a Camel route that listens on a CXF endpoint. The CXF endpoint points to a WSDL that defines the route’s interface. The route includes a CXF transformation map that declaratively defines mappings that should be done to non-conforming messages as they enter the route. Note the mappings are defined in such a way that they won’t affect messages that do adhere to the WSDL.
The Spring file defines a route that begins with a CXF endpoint (listener). The CXF endpoint is configured with the transform feature, and the individual transformations are defined in the mapping entries.
As you can see, it is pretty straightforward to configure a route so that a payload transformation is applied to incoming messages. Likewise, it is just as easy to apply a transformation to the corresponding response, converting the compliant response from the route into the (potentially non-conformant) response that the client expects.
Transforming Outgoing Requests to backend
For a scenario where a route sends a request to a backend server, you may have written your route's CXFEndpoint (client) against a particular WSDL, but the backend service for some reason needs the message to be tweaked slightly. To apply a transformation to a request message before it is written to the wire, and to a response before it gets dispatched to the route, you can use an outgoing transform.
The following code shows the implementation of an outgoing transform feature.
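That snippet is not reproduced here either; an illustrative sketch of an outgoing transform on a client-mode CXF endpoint (address, service class, and namespaces are placeholders) is:

```xml
<cxf:cxfEndpoint id="deqyService"
                 address="http://backend:8080/deqyService"
                 wsdlURL="wsdl/deqy.wsdl"
                 serviceClass="gov.foo.services.DeqyPortType">
    <cxf:features>
        <bean class="org.apache.cxf.feature.StaxTransformFeature">
            <!-- tweak requests as they are sent to the backend -->
            <property name="outTransformElements">
                <map>
                    <entry key="{http://enterprise.namespace}*"
                           value="{http://backend.namespace}*"/>
                </map>
            </property>
            <!-- undo the tweak on responses before they re-enter the route -->
            <property name="inTransformElements">
                <map>
                    <entry key="{http://backend.namespace}*"
                           value="{http://enterprise.namespace}*"/>
                </map>
            </property>
        </bean>
    </cxf:features>
</cxf:cxfEndpoint>
```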
The mapping 'outTransformElements' defines the transformation applied to requests as they are sent from the route to the backend service, and 'inTransformElements' defines the transformation applied to responses before the messages re-enter the route.
The transform feature is applied to the CXFEndpoint 'deqyService'.
The route reads a message from a file in the 'inbox' directory and sends it to the backend service. The route receives the response and writes it to a file in the 'outbox' directory.
Consider a test where the message content of the file dropped into 'inbox' conforms to the WSDL defined on the outgoing CXFEndpoint, i.e. 'deqy.wsdl'. Imagine the backend service requires a tweak to a well-structured request; for example, the namespace the backend service expects in incoming requests (and writes to responses) differs from what is defined in the WSDL. We can apply any such tweak using a transform defined on the endpoint. In this scenario the well-formed message we drop into 'inbox' gets transformed just before it is sent to the backend service, based on the transform defined in 'outTransformElements'. Likewise, the response from the backend service gets transformed based on 'inTransformElements' before it re-enters the route.
A Workable Solution using CXF transform Feature
The following diagram shows how a message flows through a route that consists of incoming and outgoing CXF Endpoints. Both incoming and outgoing CXF endpoints have transforms defined.  The location in the flow where the transforms are executed is shown in the diagram.
Consider a client that is written against the interface defined by the Service 'A' WSDL. The client sends a request to the route in the format of the Service 'A' WSDL. The incoming 'inTransform' (position 1 in the diagram) converts messages from the (non-conforming) format sent by clients into a format adhering to the Service 'B' WSDL. Without this transform, the incorrectly formatted message sent from the client would cause an operation/binding mismatch when it hit the route's entry CXFEndpoint, and the request would fail on entry to the route.
The message continues through the route, through Processor A and B and then reaches the consumer CXFEndpoint. This CXFEndpoint will call the backend Service ‘B’. The format of the message is correct (adheres to Service ‘B’ WSDL) and Service ‘B’ will respond with correctly formatted message. In this solution, the ‘outTransform’ and ‘inTransform’ at position 2 and 3 are not required.
The response from Service ‘B’ flows back into the route, and the response goes through Processor ‘C’ where a camel-based transformation (e.g. xquery, XSLT, java code) can be used to transform/enrich the response in any way required. This transformed message can now be returned back to client. The ‘outTransform’ (position 4) is not necessary in this solution.
Using CXF in Generic Provider Mode
One problematic aspect of the CXF transformation feature is that you are restricted to transformations that convert messages into something conforming to the WSDL the endpoint is tied to. If a message is transformed in a way that makes it non-conformant, an error is thrown when it reaches the endpoint.
To make the use of the CXF transform easier within Camel, there is a way to disable the requirement that the incoming message match the operation/binding of the associated CXFEndpoint. With this message-level validation disabled on the endpoint, you can freely apply transformations to any format, and messages will continue to pass through the endpoint without failure.
Be careful when overriding this: the purpose of a WSDL is to do the operation checking, perform special handling based on the determined operation, and provide a way for the consumer to get the typed interface information (i.e., retrieving the WSDL from the endpoint). By disabling validation you give up all of these, so you can simply set up your endpoint in generic provider mode (i.e., omitting the wsdlURL and serviceClass properties of the endpoint).
In this case, the message will be transformed using CXF's transform feature and transferred directly to your Camel route, where you can make any further modifications necessary and set the operationName header to match the operation at the outbound endpoint.
In this way, you can allow messages to be transformed using CXF transform feature without the problem of adhering to a particular interface.
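As a sketch, a generic-provider-mode endpoint simply drops the wsdlURL and serviceClass attributes (the id and address here are placeholders; the PAYLOAD setting is the camel-cxf data format that keeps the body as raw XML):

```xml
<!-- No wsdlURL or serviceClass: messages pass through without
     operation/binding validation -->
<cxf:cxfEndpoint id="genericEndpoint"
                 address="/genericService">
    <cxf:properties>
        <!-- keep the message as an XML payload for the transform feature -->
        <entry key="dataFormat" value="PAYLOAD"/>
    </cxf:properties>
</cxf:cxfEndpoint>
```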
The transformation capabilities of the CXF transform mapping language are limited. The language works well for minimal changes such as element name changes or namespace changes. More involved changes, such as structural changes to complex types or multi-faceted elements, are not easily implemented using the mapping language.
A CXF transform feature is applied to a specific CXFEndpoint in a route. Because the transform is tightly tied to the specific CXFEndpoint, it cannot be used to apply a transformation at random spots on a message flow through the route. This limits the usage of the CXF transform feature as a general mapping tool for transformations within a camel route.
To make the CXF transform feature useful for performing general transformations on messages as they flow through a route, you will likely need to disable the operation validation of messages at the CXFEndpoint and use the generic provider mode. This allows transformations that would not be possible under the strictly-typed message checks that are normally part of message entry into the route.
Configuration model for “Environment Aware” Routes 

This document describes a configuration mechanism for Fuse routes that simplifies deployment. You no longer need to prompt the installer for the environment and update properties in the install script. Instead, the configuration file (.cfg) contains property values for every environment, and the route reads only the properties associated with the environment in which it runs. This configuration model brings with it the following characteristics:
  • Properties file contains values of ALL properties for ALL environments.
  • Deployment no longer needs to involve logic to set environment-specific property values.

By limiting the logic and user interaction of the deployment script, there is less chance of making an error in the script, and there is less (scripting) code to maintain. Environment-specific values are no longer hard-coded into the script; instead these properties are set up-front in static configuration (.cfg) file(s) that get deployed with the route(s).

Version Info
This document was written based on JBoss Fuse 6.x and applies to bundles running inside this version of the JBoss Fuse container.

Sample Property file
Let's take a look at how a property file (.cfg) looks under this model. Notice the file contains properties for every environment (DEV, VAL, INT, PRD, etc.). Each property carries an identifying prefix that marks it as belonging to a specific environment.
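A sketch of such a file (the URIs and property names are illustrative):

```
# DEV environment
DEV.from.uri=file:/data/dev/inbox
DEV.to.uri=file:/data/dev/outbox

# VAL environment
VAL.from.uri=file:/data/val/inbox
VAL.to.uri=file:/data/val/outbox

# PRD environment
PRD.from.uri=file:/data/prd/inbox
PRD.to.uri=file:/data/prd/outbox
```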

The above properties are written in such a way that they are "environment aware": only the properties for the environment Fuse is running in are picked up, and the rest are ignored. Having properties for all environments in one file may seem excessive, but having every environment's values available in the configuration file as a reference is arguably useful.
How does Fuse know what environment it is running in?
Fuse will need to be configured to “know” what environment it is running in.  You will need to do a one-time manual edit of a specific Fuse configuration file.
Inside the Fuse configuration file '$FUSE_INSTALL\etc\system.properties' you can add the following entries.
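For example (the values shown are illustrative):

```
# identifies the environment this container runs in
karaf.environment = DEV
# identifies the machine (instance) within the environment
karaf.instance = 1
```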

The two added entries are 'karaf.environment' and 'karaf.instance'. The values of these new system properties should be appropriate for your environment and instance. If you have only a single machine per environment, 'karaf.environment' is the only system property that needs to be defined. Add 'karaf.instance' if an environment has two or more machines and property values may differ depending on the particular machine (instance) within that environment. You will need to restart Fuse for these environment-indicator properties to be available to your bundle's configuration.
Once Fuse is restarted with the above system properties set, entries like the following in your bundle's configuration file take on special meaning.
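A sketch of such entries (property names are illustrative):

```
# resolve the environment-specific value via the nested system property
from.uri=${${karaf.environment}.from.uri}
to.uri=${${karaf.environment}.to.uri}
```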

The configuration file specifies property values using nested system properties to identify the environment. For example, the value of 'to.uri' is defined using the nested system property 'karaf.environment'.



The '${karaf.environment}' system property (defined in the Fuse 'system.properties' file) resolves at runtime to the environment name, e.g. 'DEV', and that name designates the prefix for the property, i.e. '${DEV.to.uri}'. This in turn resolves to the matching prefixed property found further down in the configuration file. In this example, the effective value of 'to.uri' becomes the value of the 'DEV'-prefixed version of the property.
Properties for all environments where the route will be installed are included in the configuration file. The prefix for each environment must match the 'karaf.environment' value, exactly as it appears in the 'system.properties' file of the container. For example: DEV, VAL, INT, PRD, etc.

Using the properties in the Camel Context (Spring XML configuration)
Because the use of the nested properties is confined to the configuration file only, there is nothing special that needs to be done in the Spring configuration for your route in order to use this configuration model.
The Spring configuration refers to the normal (non-prefixed) version of the properties. The environment-specific resolution is confined to the property file and is not a concern within the Spring XML of your route.
Property placeholders are used as values of properties injected into a Spring bean, and the value of a property placeholder can also be used within the Camel route itself. (Notice the need for double brackets '{{' and '}}' to de-reference such properties from within the Camel route.)
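A sketch of both usages (the bean class, ids, and property names are made up, and the property-placeholder wiring to the .cfg file is omitted):

```xml
<!-- bean property values come from the normal (non-prefixed) placeholders -->
<bean id="myBean" class="com.example.MyBean">
    <property name="targetUri" value="${to.uri}"/>
</bean>

<camelContext xmlns="http://camel.apache.org/schema/spring">
    <!-- note the double brackets when de-referencing inside the route -->
    <route>
        <from uri="{{from.uri}}"/>
        <to uri="{{to.uri}}"/>
    </route>
</camelContext>
```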

In summary, the use of environment-specific properties within your configuration does not affect your route's code.

Monday, September 29, 2014

How do I configure Red Hat JBoss Fuse to send an Email when specific logging event occurs?

In this post I will describe how to configure Fuse ESB to automatically send emails when certain logging events occur. This technique uses Log4j's SMTPAppender.

Log4j can be configured to push email notification immediately when an ERROR occurs inside the ESB.
The configuration of the SMTPAppender within Fuse is not particularly straightforward. There are some gotchas that I have attempted to point out here. Perhaps future versions of Fuse will make this easier to configure.

These instructions are written based on Fuse version 6.0.0.redhat-024, available for download in platform-independent form (zip file) from Red Hat.

Or, assuming you have maven installed, the following command will download the installation .zip file, and put the file into your local maven repository:
mvn org.apache.maven.plugins:maven-dependency-plugin:2.8:get -Dartifact=org.jboss.fuse:jboss-fuse:6.0.0.redhat-024:zip

Some of the complications with getting this to work have been reported in a bug report against Fuse.

My research indicates that you must apply the configuration to a "fresh" installation of Fuse. If you have an existing installation, you must clear the bundle cache (i.e. completely remove the 'data' directory), then make the necessary configuration changes before starting Fuse for the first time. This is annoying if you merely wish to add the feature to an existing installation.
Here are the steps I took to enable SMTPAppender on a fresh install of Fuse. Before starting Fuse for the first time, I applied the following steps:

1)      Add configuration for the SMTP appender (as shown below) into [jboss-fuse-6.0]/etc/org.ops4j.pax.logging.cfg. Be sure to back up the file first. The updated configuration:
# Root logger
log4j.rootLogger=INFO, out, osgi:VmLogAppender, mail

# SMTP appender
# (the From/To addresses below are placeholders - substitute your own)
log4j.appender.mail=org.apache.log4j.net.SMTPAppender
log4j.appender.mail.layout=org.apache.log4j.PatternLayout
log4j.appender.mail.layout.ConversionPattern=%d [%t] %-5p %c %x - %m%n
log4j.appender.mail.Subject=JBoss Fuse 6.0 Error Log Message
log4j.appender.mail.SMTPHost=mailrelay.mycompany.com
log4j.appender.mail.From=fuse@mycompany.com
log4j.appender.mail.To=admin@mycompany.com
log4j.appender.mail.Threshold=ERROR
log4j.appender.mail.SMTPDebug=true
2)      In the [jboss-fuse-6.0]/etc/jre.properties file, comment out the javax.activation;version="1.1" entries.

The 'jre.properties' file lets you define which packages in Fuse ESB should be provided by the JRE and which can be provided by application bundles. Updating this file prevents the JRE's 'javax.activation' package from being exported.

Again make a backup of ‘jre.properties’ before modifying.

Now modify the file as indicated below. Note there are two occurrences of 'javax.activation' (for JRE 1.6 and 1.7) and you should comment out both.

# Standard package set.  Note that:
#   - javax.transaction* is exported with a mandatory attribute
jre-1.6= \
 javax.accessibility, \
# javax.activation;version="1.1", \
 javax.activity, \
 javax.annotation;version="1.1", \
 javax.annotation.processing;version="1.1", \
 javax.crypto, \
 javax.crypto.interfaces, \
 javax.crypto.spec, \

# Standard package set.  Note that:
#   - javax.transaction* is exported with a mandatory attribute
jre-1.7= \
 javax.accessibility, \
# javax.activation;version="1.1", \
 javax.activity, \
 javax.annotation;version="1.1", \
 javax.annotation.processing;version="1.1", \
 javax.crypto, \
 javax.crypto.interfaces, \
 javax.crypto.spec, \
 javax.imageio, \

3)      Because the previous step blocks the JRE's version of the 'javax.activation' package from being exported, we can force a different version of this package to be used by Fuse. To do this, copy the org.apache.servicemix.specs.activation-api-1.1-2.0.0.redhat-60024.jar to [jboss-fuse-6.0]/lib/endorsed.

You can retrieve the JAR file from a public Maven repository.

Or, assuming you have maven installed, the following command will download the JAR file into your local maven repository:
mvn org.apache.maven.plugins:maven-dependency-plugin:2.8:get -Dartifact=org.apache.servicemix.specs:org.apache.servicemix.specs.activation-api-1.1:2.0.0.redhat-60024

Note: The org.apache.servicemix.specs.activator-2.0.0.redhat-60024.jar should already be in the [jboss-fuse-6.0]/lib directory by default.

4)      The Log4j SMTPAppender requires a mail bundle to be deployed into Fuse. The recommended way to do this is to configure Fuse so that the bundle is deployed on start-up: define your own custom feature with a start level low enough to take precedence. I picked this trick up from KARAF-3067. This ensures a supported version of the javax.mail API is installed and used.
I created a feature called 'javax.mail'. The feature descriptor is called 'javax.mail-1.0.0.redhat-60024-features.xml'.


<features name="javax.mail-features">
   <feature name="javax.mail" version="1.4.5">
      <bundle start-level="7">mvn:javax.mail/mail/1.4.5</bundle>
   </feature>
</features>

Copy the above feature descriptor to a location where Fuse can find it. Create a directory 'local-repo' in the root installation folder; Fuse will look there by default for Maven dependencies. The complete path to the location where I copied the descriptor was: [jboss-fuse-6.0]/local-repo/com/mycompany/features/javax.mail/1.0.0.redhat-60024

The name of the feature descriptor file in that directory was 'javax.mail-1.0.0.redhat-60024-features.xml'.

Remember, the actual name of the feature descriptor file and the location where it is copied matter (a lot). The location must match the standard well-known Maven directory layout based on groupId/artifactId/version coordinates.
5)      Add the custom feature ‘javax.mail’ to the set of features that Fuse installs at startup.
Once the feature descriptor file is in a location where Fuse can find it, add the feature to the list of features that get installed at Fuse startup by editing the file '[jboss-fuse-6.0]/etc/org.apache.karaf.features.cfg'. Make a backup of this file first; the necessary additions are shown below.


# Comma separated list of features repositories to register by default
# (keep the existing entries and append the URL of the new feature descriptor)
featuresRepositories=...,mvn:com.mycompany.features/javax.mail/1.0.0.redhat-60024/xml/features

# Comma separated list of features to install at startup
# (keep the existing entries; add javax.mail at the front of the list)
featuresBoot=javax.mail,...
Note the addition of the feature URL in the ‘featuresRepositories’ property and also the additional feature ‘javax.mail’ added to the front of the ‘featuresBoot’ list.

6)      Finally, after configuring everything, I started Fuse. The first time I started Fuse after making the configuration changes described above, I got an ERROR that looked something like this:

14:25:46,185 | ERROR | s4j.pax.logging) | configadmin | 5 - org.apache.felix.configadmin - 1.4.0.redhat-60024 | [org.osgi.service.log.LogService, org.knopflerfish.service.log.LogService, org.ops4j.pax.logging.PaxLoggingService, org.osgi.service.cm.ManagedService, id=9, bundle=3/mvn:org.ops4j.pax.logging/pax-logging-service/1.7.0]: Unexpected problem updating configuration org.ops4j.pax.logging
java.lang.NoClassDefFoundError: javax/mail/MessagingException
        at java.lang.Class.getDeclaredConstructors0(Native Method)
        at java.lang.Class.privateGetDeclaredConstructors(Class.java:2532)
        at java.lang.Class.getConstructor0(Class.java:2842)
        at java.lang.Class.newInstance(Class.java:345)
        at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:336)
        at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:123)
        at org.apache.log4j.PaxLoggingConfigurator.parseAppender(PaxLoggingConfigurator.java:97)
        at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)
        at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502)
        at org.apache.log4j.PaxLoggingConfigurator.doConfigure(PaxLoggingConfigurator.java:72)
        at org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.updated(PaxLoggingServiceImpl.java:

I had to shut down and restart Fuse; the second start seemed to do the trick. If the container is properly configured, when you start Fuse (the second time) you will see logging at the Fuse console indicating the SMTPAppender is operational (per the setting log4j.appender.mail.SMTPDebug=true).

Note: I only saw the SMTP debug output when I started Fuse in the foreground using './fuse', rather than starting it in the background with './start'.

7)      To test the email notification, deliberately cause an error in Fuse. For example, you could drop a bogus Spring file into the deploy directory (purposely including invalid XML so that the ESB logs an ERROR). You should see debug output in the Fuse console indicating an email being sent, or you may see an error of some sort indicating that further configuration is required for SMTP to work properly.


Thursday, April 12, 2012

Using Chainsaw to view Servicemix logs

Chainsaw is an open-source GUI-based log viewer. If you've ever wanted to use Chainsaw to view Servicemix log files, this post is for you!
I use the FuseSource distribution of Servicemix (aka Fuse ESB).
You can download Fuse ESB from the FuseSource site.
I was able to visualize the Servicemix logs inside the Chainsaw GUI using the following steps:
1)      Install Chainsaw.
Download the latest distribution of chainsaw.
I downloaded the “Unix/Dos standalone version”
Extract to location of choice.
2)      Configure Chainsaw to listen for logging events on a given port.
In the directory where I extracted Chainsaw, I created a file "chainsaw-config.xml" with the following contents.
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration>
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true">
    <plugin name="SocketReceiver" class="org.apache.log4j.net.SocketReceiver">
        <param name="Port" value="4560"/>
    </plugin>
    <root>
        <level value="debug"/>
    </root>
</log4j:configuration>
The above configures Chainsaw with a SocketReceiver that will listen on port 4560 (the port the appender configured below sends to) to receive logging events.

3)      Configure Servicemix to send logging events.

Now configure Servicemix to send logging events to this port by updating the Servicemix configuration. Edit the config file 'org.ops4j.pax.logging.cfg', using the following as a guide:

# the below line is an 'edit' of an existing line - added a Chainsaw appender
log4j.rootLogger = INFO, sift, Chainsaw, osgi:VmLogAppender

# the below are completely new lines to configure the Chainsaw appender
log4j.appender.Chainsaw = org.apache.log4j.net.SocketAppender
log4j.appender.Chainsaw.remoteHost = localhost
log4j.appender.Chainsaw.port = 4560

Make sure "remoteHost" is the IP of the machine where you are running Chainsaw (or localhost if you plan to run the Chainsaw GUI on the same box as Servicemix). Once you save this file the changes take effect immediately; there should be no need to restart Servicemix.
You may see Servicemix complain about not being able to find a receiver for its events. That's because you don't have Chainsaw up yet. Don't worry: Servicemix will retry once you stand up the Chainsaw GUI.
4)      Bring up Chainsaw
From the directory where you installed chainsaw, run the file ‘chainsaw.bat’.
The first time you start it, it will warn you about not having receivers defined. You can select "Let me use a simple Receiver:" and choose "SocketReceiver" on port "4560 (Default SocketAppender port)".
5)      View the logs in Chainsaw
Since we are experimenting, you may want to turn up the logs in Servicemix to prove all the logs are reaching Chainsaw.

From the Servicemix console type:
log:set DEBUG root

This will turn up the logging to highest level.

Once Servicemix sends an event, you will eventually see a tab pop up in Chainsaw that contains the logs received from the Servicemix instance. You should see quite a bit of logging displayed in Chainsaw.

To turn off the excessive logging and return Servicemix to the default logging level, from the Servicemix console type:

log:set INFO root

I'm not totally convinced yet whether this will be a useful capability. I'm open to suggestions on how this might be used, or whether people use this sort of thing at all. I'm a little disappointed in the overall model, in that Servicemix must be configured to send logs to Chainsaw.

Monday, March 12, 2012

A Generic Pass-Through Route to solve a variety of Integration Challenges

Have you ever needed to write a proxy (pass-through) route? This scenario applies to situations where you receive an HTTP request from a client and simply need to redirect that request to another back-end server, returning the back end's response to the original client.

You might want to add some custom processing or transformation to the message (or not) before sending it to the back-end service. Camel makes it easy to add such custom processing. I’ve seen cases where the simple ability to log and provide a “point of record” was enough to warrant the use of a pass-through route. Another use for a pass-through is to “fix” a request message (so that it contains the expected payload) before sending it on to the server, or to manipulate a response into a suitable format for the client. You can imagine a pass-through route that doesn’t touch requests coming from newer clients, but that updates requests from legacy clients on the fly so those messages adhere to the newer version, eliminating the need to maintain an older back-end server.
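As a sketch of that last idea, a pass-through route could branch on some version indicator and rewrite only the legacy payloads. The endpoint URIs, the “ClientVersion” header, and the stylesheet name below are all hypothetical, purely to illustrate the shape of such a route:

```xml
<route>
  <!-- hypothetical endpoints: receive from clients, forward to the back end -->
  <from uri="jetty:http://0.0.0.0:5012/myservice/send"/>

  <!-- rewrite only payloads from legacy clients; the 'ClientVersion'
       header and legacy-to-v2.xsl stylesheet are illustrative -->
  <choice>
    <when>
      <simple>${header.ClientVersion} == 'v1'</simple>
      <to uri="xslt:legacy-to-v2.xsl"/>
    </when>
  </choice>

  <to uri="jetty:http://localhost:3000/myservice/send"/>
</route>
```

Newer clients fall straight through the `<choice>` untouched, while legacy requests are upgraded in flight before reaching the back end.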

So now that we’ve established the usefulness of the pass-through pattern, I will show you an easy way to implement it. Apache Camel makes it very easy to write a route that implements the pass-through pattern; creating one takes just a matter of minutes. You can create a new FUSE project using FUSE IDE. You can download FUSE IDE from here:

Once installed, FUSE IDE will allow you to easily create any type of route.

Below is the spring configuration that implements the proxy route.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:camel="http://camel.apache.org/schema/spring"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

       <camelContext xmlns="http://camel.apache.org/schema/spring">

              <route>

                     <from uri="jetty:" />

                     <to uri="log:com.fusesource.proxytest?level=DEBUG&amp;showAll=true&amp;multiline=true&amp;showBody=true" />

                     <removeHeaders pattern="CamelHttp*"/>

                     <removeHeader headerName="Host"/>

                     <to uri="jetty:http://localhost:3000/myservice/send" />

              </route>

       </camelContext>

</beans>
From the example, you can see that the incoming component is Camel Jetty, and the outgoing component is also Camel Jetty.

The reason for the “removeHeaders” step is to remove the CamelHttp* headers that are populated by the incoming Camel Jetty consumer. These headers are retained as the exchange passes through the route, and their presence can change the behavior of the outgoing Camel Jetty producer: the producer that sends requests to the back end may honor a header value in preference to the destination URL configured on the endpoint, so you can end up with an unexpected destination URL. Take care with these CamelHttp* headers; my advice is to remove them in routes that go from a Jetty consumer to a Jetty producer. The basic idea is explained here:
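An alternative to removing the headers explicitly is the `bridgeEndpoint` option on the producer endpoint, which tells the HTTP-based producer to ignore the CamelHttpUri/CamelHttpPath headers set by the consumer. A minimal sketch (the host, port, and path are the same illustrative values used above):

```xml
<!-- bridgeEndpoint=true makes the jetty producer ignore the CamelHttp*
     URI headers, so the request always goes to the URI configured here -->
<to uri="jetty:http://localhost:3000/myservice/send?bridgeEndpoint=true"/>
```

Either approach works; I still prefer removing the headers so that nothing else downstream can be confused by them.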

You may be tempted to use the Camel HTTP component (camel-http) rather than 'jetty' for calling the back end in your pass-through route. However, the 'jetty' producer has better performance under load, so it is almost always the right choice for this pattern.

The Jetty HTTP client endpoint uses the Jetty library to implement an HTTP client. In particular, it supports a client thread pool and non-blocking request/response. See more on the benefits of the Camel 'jetty' producer in the FUSE documentation on writing a pass-through route.

I’m attaching a working example. If you want to play with working code, you can download it from here.
The download contains the following sub-projects:
proxytest - A Maven project that implements the route. To build, run 'mvn install'. To run the route in standalone mode, type 'mvn camel:run'.

mockservice - A simple mock server that mimics a real back-end service. Your pass-through route can redirect requests to it in the absence of a real back-end service. To build, type 'mvn install'. To run the mock service in standalone mode, type 'mvn camel:run'.

Included is a sample SOAP UI client project (see proxytest\proxy-test-soapui-project.xml) that you can import into SOAP UI and use to send HTTP requests into the route. The route receives the request on port 5012 and redirects it to the mock service listening on port 3000. The mock service simply echoes back the request, and the route then returns the response to the original SOAP UI client. You can use SOAP UI to perform load tests on your new proxy service!