SOA or DOA
Web applications built on a service-oriented architecture (SOA) promise to greatly improve IT efficiency

To tackle these challenges so that the SOA application is not DOA, IT needs two capabilities at its disposal. First, IT has to be able to monitor application performance as experienced by the real user, because that is the only place where the performance of all of the constituent services is felt. Second, when a service-level violation is detected, IT has to be able to quickly trace the offending transaction and pinpoint the cause of the performance problem. Capitalizing on these capabilities in a systematic way allows IT to:

  • Improve service levels by discovering performance bottlenecks and shortening the time to problem resolution.
  • Lower the costs (and frustrations) of operation management by eliminating unnecessary triage meetings and fruitless problem recreation attempts.

How do existing end-user monitoring techniques fare in delivering the capabilities IT needs to avoid application DOA? Not well. Let’s take a quick survey of existing monitoring techniques as they apply to SOA performance management:

  • Sniffers or other packet capture appliances can estimate the round-trip response time of a packet algorithmically, but lack the ability to measure the response time of transactions whose path does not pass through the point on the network where the appliance is installed. Take mashup applications as an example – critical data objects are supplied by third-party applications, potentially bypassing any sniffers installed in front of the Web servers. In the data center, the sniffers are also blind to Web Services calls among servers or third-party services that form the basis of an SOA application.
  • Server monitoring tools can only report on the transaction response time of the infrastructural silo they are monitoring. For example, popular J2EE application server monitoring tools measure only the response time on transactions that involve the application server. Transactions served directly by the Web or third-party servers, which never touch the application tier, cannot be managed by the J2EE monitoring tool.
  • Traditional website performance monitoring services can detect whether an SOA application is available or not, but they cannot report on performance as experienced by real users or provide actionable information that pinpoints the cause of problems to guide corrective effort.
  • Pure-play SOA management products can help IT model the interdependencies among various services and provide limited transaction path information, but are often blind to the health of the infrastructure that supports the orchestration. More important, they have no visibility into the ultimate performance as experienced by the end user.

In terms of providing “real” actionable information for managing SOA performance, these legacy tools are deficient not only in the type of performance data they collect, but also in where the data is collected. Application performance must be defined as the response time perceived by the end user, not as server, network, J2EE, database, or other silo-oriented metrics; ultimately, the experience of the end user is the only thing that matters. Moreover, for mashup applications where the Web page is served by multiple servers or third-party data centers, or when the application is delivered using content delivery networks, the application might not even come together until the content arrives at the browser. As a result, the only valid measurement of good or bad SOA application performance is the one taken directly at the real user’s browser.

To deliver the real user monitoring and transaction tracing capabilities needed to keep SOA from going DOA, IT needs three integrated functions in its SOA performance management tools:

Detect: “You cannot manage what you cannot measure.” Having a quantitative way to determine whether the SOA application meets service-level requirements is the first step in SOA management. In other words: “Is the right application response (data, page, action, etc.) delivered to the right user in the right amount of time?” There are numerous QA techniques to ensure that the right application response is delivered, and most organizations have the necessary security to assure that the right person is receiving the information. But assuring that the information is delivered at the right time to the end user through the complex Web-based SOA infrastructure is another matter. The ability to non-intrusively monitor application performance as experienced by real users is an absolute necessity because it is (1) the only way to accurately detect problems experienced by real users of SOA applications for service-level assurance and reporting, and (2) a key driver for making process or application response-time improvements. The starting point of such monitoring is the end user’s browser, where the application truly “comes together.” It is at the browser that IT can take into account “last mile” circumstances and identify whether an incident has occurred that will affect user satisfaction. Data collected by legacy tools that focus on monitoring a particular technology silo – like network routers, Apache Web servers, WebSphere application servers, or .NET Frameworks – cannot be extrapolated to determine what the actual end users of complex SOA applications are experiencing in the browser.
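
As a concrete illustration of measuring at the browser, the sketch below (TypeScript, using the standard Navigation Timing API; the /rum/beacon collection endpoint is a hypothetical name) captures page-load timing exactly as the user experienced it and reports it non-intrusively:

```typescript
// Real-user monitoring sketch: measure page timing in the browser itself,
// where the SOA application finally "comes together" for the user.
window.addEventListener("load", () => {
  // Defer one tick so loadEventEnd has been recorded.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    if (!nav) return;

    const sample = {
      page: location.pathname,
      // Time to first byte, including the user's "last mile".
      ttfbMs: Math.round(nav.responseStart - nav.requestStart),
      // DOM-ready and full page load as the user actually experienced them.
      domReadyMs: Math.round(nav.domContentLoadedEventEnd - nav.startTime),
      loadMs: Math.round(nav.loadEventEnd - nav.startTime),
      ts: Date.now(),
    };

    // sendBeacon is asynchronous and non-blocking, so the measurement stays
    // non-intrusive. "/rum/beacon" is a hypothetical collection endpoint.
    navigator.sendBeacon("/rum/beacon", JSON.stringify(sample));
  }, 0);
});
```

Samples like these, aggregated per page and per user population, are the raw material for service-level assurance and reporting.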

Isolate: Once the application performance as experienced by the end user is known, it has to be correlated with the performance profile of all the infrastructure and application components involved in delivering the SOA-based response. Since composite applications (1) are made up of services that are “black boxes” whose performance cannot be controlled or tuned by those orchestrating the application, (2) run on physical or virtual infrastructure components that are not entirely under the control of IT operations, and (3) may have different parts of a transaction served by different data centers or servers, including third-party service providers, it is important that the performance of each transaction is reported and correlated across all infrastructure tiers, third-party data centers, and application components. Performance correlation can be achieved by painstaking log-file analysis and heuristics to match up IP addresses and request times across the various tiers, but this methodology is error-prone and difficult even when all of the logging information is accessible, and impossible when the transaction touches a tier outside the data center where log files are unobtainable. A simpler mechanism is to tag each transaction non-intrusively at the end user’s browser and dynamically trace it through the entire infrastructure, logging appropriate performance data at each tier. Such an end-to-end view of performance based on the real user’s experience offers the bird’s-eye view needed to pinpoint the incidents, errors, bugs, or bottlenecks that impact end-user response time.
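
A minimal sketch of this tagging approach, assuming browser-side TypeScript and a hypothetical X-Transaction-Id header name: each request leaving the browser carries an identifier that every downstream tier can write to its own log along with the time it spent.

```typescript
// Tag every request leaving the browser with a transaction ID so each tier
// (web server, ESB, application server, database) can log the same ID with
// its own timing, allowing end-to-end correlation of a slow transaction.
function newTransactionId(): string {
  return `${Date.now().toString(36)}-${Math.random().toString(36).slice(2, 10)}`;
}

async function tracedFetch(url: string, init: RequestInit = {}): Promise<Response> {
  const txId = newTransactionId();
  const started = performance.now();

  const headers = new Headers(init.headers);
  headers.set("X-Transaction-Id", txId);

  const response = await fetch(url, { ...init, headers });

  // The browser-side view of this transaction's latency; server tiers log
  // their own share of the time under the same transaction ID.
  console.log(
    `tx=${txId} url=${url} status=${response.status} ` +
      `clientMs=${(performance.now() - started).toFixed(1)}`
  );
  return response;
}

// Usage: tracedFetch("/api/orders/123").then((r) => r.json());
```

Matching the same identifier across the browser log, the Web tier, the ESB, and the application tier is what turns a vague “the site is slow” complaint into a pointer at the tier or third-party service where the time actually went.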

Optimize: A holistic browser-to-database view of transaction performance provides actionable information, so ad hoc or trial-and-error approaches are no longer needed to identify and respond to performance problems. Without actionable information, IT incident response teams will likely spend more time debating the cause and attempting to re-create the problem than implementing a fix and restoring the business function. By analyzing correlated transaction performance information over time, IT can identify leading indicators of performance concerns and proactively resolve them before an incident impacts user satisfaction or business productivity. The same information also helps identify areas for performance improvement in the infrastructure, services, and application.
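
One way to turn the correlated history into a leading indicator (a sketch with made-up numbers, not a prescribed algorithm) is to compare the recent 95th-percentile response time against a baseline window and flag a sustained increase before users start complaining:

```typescript
// Sketch: flag a potential degradation by comparing the 95th-percentile
// response time of a recent window against a baseline window.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[idx];
}

function isDegrading(baselineMs: number[], recentMs: number[], threshold = 1.25): boolean {
  if (baselineMs.length === 0 || recentMs.length === 0) return false;
  // A sustained p95 increase beyond the threshold is worth investigating
  // before a service-level violation or a costly outage occurs.
  return percentile(recentMs, 95) > percentile(baselineMs, 95) * threshold;
}

// Example: last hour vs. the same hour last week (response times in ms).
console.log(isDegrading([320, 350, 400, 380, 360], [420, 510, 640, 700, 560])); // true
```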

Having these three functions integrated into a single SOA performance management tool gives IT an early response system to detect and react to end-user performance problems before they impact thousands of users or lead to costly site outages. Information on business impact or performance bottlenecks should be fed back to the operations staff for infrastructure or process improvement and to the developers for application optimization.

Yes, SOA can greatly enhance business agility and lower the cost of application development. However, without a real-user-oriented approach to managing SOA deployment and production systematically, it is highly likely that the SOA application will be DOA.

About Hon Wong
Hon has served as CEO of Symphoniq Corporation since its inception. Prior to joining Symphoniq, Hon co-founded NetIQ, where he served on the board of directors until 2003. Hon has also co-founded and served on the board of several other companies, including Centrify, Ecosystems (acquired by Compuware), Digital Market (acquired by Oracle), and a number of other technology companies. Hon is also a General Partner of Wongfratris Investment Company, a venture investment firm. He holds dual BS degrees in electrical engineering and industrial engineering from Northwestern University and an MBA from the Wharton School at the University of Pennsylvania.


Reader Feedback

Chris Weiss wrote:

This article exaggerates the performance and troubleshooting problems with SOA. Most of the time, simple logging is sufficient for identifying performance problems and transaction failures. Any technology that is misused will have performance problems. If a centralized orchestration platform is used (ESB, process controller, orchestrator, etc.), this piece of a SOA application can usually provide more than enough trace information to deal with problems.

The biggest problem with SOA is its overuse. Enterprises must identify "sweet spot" applications for SOA technologies. Otherwise, "traditional" application integration and construction methods work better. For example, instantiating an object and calling a class method is much faster than coupling two layers of an application with a web service call.

Placing the orchestration on the client side is a mistake. Mash-up approaches are not appropriate for applications where transaction failures have real consequences. One of the serious mistakes in the application of SOA is to use Web 2.0 web interface analogies in the lower layers of business technologies. However, a well-written application that isolates orchestration away from the presentation layer can be monitored, measured, and tracked relatively easily.

Pragmatic SOA with structured design patterns used up front can get around many of the issues the author has identified. Sloppy spaghetti code without a coherent architecture in any platform will always be troublesome.
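
For readers weighing the commenter’s point about logging, a minimal sketch (hypothetical names, TypeScript) of the kind of timing-and-logging wrapper an orchestration layer can place around each service call:

```typescript
// Time and log each outbound service call at the orchestration layer; for
// many applications this is enough to spot slow or failing services.
async function callService<T>(name: string, call: () => Promise<T>): Promise<T> {
  const started = Date.now();
  try {
    const result = await call();
    console.log(`service=${name} status=ok elapsedMs=${Date.now() - started}`);
    return result;
  } catch (err) {
    console.error(`service=${name} status=error elapsedMs=${Date.now() - started}`, err);
    throw err;
  }
}

// Usage (hypothetical): const quote = await callService("pricing", () => fetchQuote(orderId));
```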

