
LiveCycle ES2 onwards performance issue and its resolution


Almost 2 years back I did a quick post rejoicing about the removal of JMS from LiveCycle and the introduction of the Work Manager API in LiveCycle ES2. It’s been a while since then, and this post summarises what benefits we get with Work Manager and what is critical to do in any LiveCycle ES2+ installation so that Work Manager works in your favour and not the other way around.

Important: ALL of the information below is based on my observations and testing against different LiveCycle ES2+ versions on different platforms (OS/database/app server). ALL of it relates to LONG-LIVED processes only. Repeat, LONG-LIVED processes. Short-lived processes are executed in a totally different way in LiveCycle, still work the same as before, and are out of scope for this discussion.

Background:

  1. Before LiveCycle ES2 came out, LiveCycle used JMS to facilitate long-lived process execution.
  2. When someone or some program invoked a long-lived process, the call to start the process execution was delivered to the process engine via a JMS message.
  3. This didn’t work perfectly under high load on the servers.
  4. Then we got the LiveCycle ES2 version with the Work Manager API, which replaced JMS (my previous post on it is here).
  5. This Work Manager API (JSR-237) was a good move forward (in my opinion), allowing a container-managed programming model for concurrent execution of work (see the sketch right after this list).
  6. Since that ES2 release, LiveCycle uses the Work Manager API to facilitate long-lived process execution. The latest release of LiveCycle is ES3.
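For readers less familiar with that programming model, here is a minimal, illustrative sketch of the commonj-style Work Manager API (the de facto form of JSR-237) as used from inside a container. The JNDI name and the work body are assumptions for illustration only; this is not LiveCycle internals.

import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;

public class WorkManagerSketch {
    public static void submit() throws Exception {
        // The application server exposes a WorkManager through JNDI;
        // the exact JNDI name varies per server and configuration (assumed here).
        WorkManager wm = (WorkManager) new InitialContext().lookup("java:comp/env/wm/default");

        // schedule() hands the unit of work to container-managed threads and
        // returns immediately, instead of the caller spawning raw threads itself.
        wm.schedule(new Work() {
            public void run()         { System.out.println("long-running work executes here"); }
            public boolean isDaemon() { return false; } // a finite piece of work, not a daemon
            public void release()     { /* container asked us to stop early; nothing to clean up */ }
        });
    }
}

The point of the model is that the container owns the threads, so it can throttle, monitor and recover the work, which is exactly the role Work Manager plays for long-lived processes in ES2+.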

 

The Performance degradation issue

As part of a project I did some investigation into how a long-lived process is executed from start to end. The complaint I heard at the time from some of my colleagues was that LiveCycle ES2.5 takes much longer to execute our processes compared to LiveCycle 8.2.

As usual I did process analysis and tried to identify pain points in the process logic, etc., but what stuck out the most was that the same process took much less time on LiveCycle 8.2 than on a default turnkey LiveCycle ES2.5 installation. The problem becomes visible and interesting when you have a lot of steps in a process or a lot of sub-process calls.

There is definitely a performance degradation (in terms of total time taken) in long-lived process execution when your process involves sub-process calls. What it comes down to is a default configuration value and how Work Manager works.

 

So this is how Work Manager works, in the simplest terms:

  1. A request for long-lived process execution is made, from the LC Java API or any of the endpoints (aka start points), etc. (a rough invocation sketch follows this list).
  2. A job id is returned to the caller and a record is created in the LC database for Work Manager.
  3. Work Manager looks at this queue and starts the process execution by doing some other inserts and updates in database tables, and the work is handed over to Process Manager, etc.
  4. At any point of the process execution, if it has any sub-process calls then the same cycle is followed: a message/record is created for the new job, Work Manager looks at that queue, executes the process and returns the result to the caller if needed.
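To make steps 1 and 2 concrete, here is a rough sketch of an asynchronous long-lived process invocation via the LC Java client API. The endpoint URL, credentials, process name and input parameter are placeholders, not values from any real project.

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import com.adobe.idp.dsc.InvocationRequest;
import com.adobe.idp.dsc.InvocationResponse;
import com.adobe.idp.dsc.clientsdk.ServiceClient;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactory;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactoryProperties;

public class InvokeLongLivedSketch {
    public static void main(String[] args) throws Exception {
        // Connection settings for the LiveCycle server (placeholder values).
        Properties props = new Properties();
        props.setProperty(ServiceClientFactoryProperties.DSC_DEFAULT_EJB_ENDPOINT, "t3://lcserver:7001");
        props.setProperty(ServiceClientFactoryProperties.DSC_TRANSPORT_PROTOCOL,
                ServiceClientFactoryProperties.DSC_EJB_PROTOCOL);
        props.setProperty(ServiceClientFactoryProperties.DSC_SERVER_TYPE, "WebLogic");
        props.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_USERNAME, "administrator");
        props.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_PASSWORD, "password");

        ServiceClientFactory factory = ServiceClientFactory.createInstance(props);
        ServiceClient client = factory.getServiceClient();

        // Hypothetical process input variable.
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("inputVar", "some value");

        // 'false' = asynchronous invocation: the call is queued for Work Manager
        // and a job/invocation id comes back instead of the process result.
        InvocationRequest request =
                factory.createInvocationRequest("MyApp/MyLongLivedProcess", "invoke", params, false);
        InvocationResponse response = client.invoke(request);
        System.out.println("Job queued, invocation id: " + response.getInvocationId());
    }
}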

Note: For the curious minds, I’ve attached a mind map file here to provide details on how things work at the database level. This is based on a blog post by the LiveCycle team.

 

So, coming back to what the problem is and why long-lived process executions are slower in ES2+.

Finding:

The Work Manager only looks at the queue for new work periodically (the default refill interval is 1000 ms), and every queued job, including every sub-process call, pays that wait, which causes the degraded performance in ES2+ versions. This is a big problem once you have a large LiveCycle application where around 40-80 processes get called to do some work. For small long-lived processes (without many sub-process calls) it’s not that bad.
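To put a rough, purely illustrative number on that: if we assume every queued process and sub-process invocation waits on average about half the refill interval before Work Manager picks it up, a request that fans out into 60 invocations sits idle for roughly 60 × 0.5 s ≈ 30 seconds of pure queue wait at the 1000 ms default, versus roughly 60 × 25 ms ≈ 1.5 seconds at 50 ms. The real numbers depend on load and process shape, but that is the shape of the overhead.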

Solution/Workaround:

So the solution or workaround for this situation is to tell Work Manager to look for new work more often. Luckily, the LiveCycle startup parameters include a setting you can tweak for this. After testing what I needed, I set the parameter to 50 from its default value of 1000 (the unit is milliseconds). So the parameter looks like the below on our WebLogic servers.

-Dadobe.work-manager.queue-refill-interval=50
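For context, this flag simply gets appended to the managed server’s Java startup arguments. On WebLogic one common place (depending on how your domain is set up) is the JAVA_OPTIONS variable in the domain’s setDomainEnv script, or the Server Start arguments in the admin console, roughly like this:

JAVA_OPTIONS="${JAVA_OPTIONS} -Dadobe.work-manager.queue-refill-interval=50"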

Here is the full set of parameters that you can review and tweak based on need.

  • adobe.work-manager.queue-refill-interval
  • adobe.workmanager.debug-mode-enabled
  • adobe.workmanager.memory-control.enabled
  • adobe.workmanager.memory-control.high-limit
  • adobe.workmanager.memory-control.low-limit

Huh?:

I was surprised not to see this mentioned in the install guide or any other docs published for LiveCycle ES2 onwards. The default is set to such a large value that bigger LiveCycle deployments don’t get the ‘same as before’ performance.

 

A few words for friends who work with LiveCycle:

The details provided above are my observations and findings from what we faced on a client project. Please share your experience on this, as I’m really interested to find out whether you have hit the same issue (where the Work Manager queue refill interval is set too high by default) and whether tweaking that value has helped in your project.

Let me know if you need a sample LCA; I’m happy to provide a simple LCA that can be used to measure the outcome before and after the tweaks.

 

As always, ping me at @pandyaparth on Twitter if you want to have a conversation about this or any other LiveCycle stuff!!


Viewing all articles
Browse latest Browse all 10

Latest Images

Trending Articles



Latest Images