Why a new generation? What’s wrong with the old one?
Traditional systems management tools focused on monitoring the health of individual components. Tools such as IBM Tivoli, BMC Patrol, CA Unicenter, and HP OpenView initially concentrated on managing servers, services, and resources. In those days the equation was relatively simple: 100% CPU utilization was bad, 10% CPU utilization was good. However, the growing complexity of applications introduced numerous new enterprise application components, including databases, connection pools, web servers, application servers, load-balancing routers, and middleware. The business service management industry followed shortly after, offering tools for database management, network traffic monitoring, application metric mining, and web server access log analysis. Each of these business service management tools “speaks” a different language: database management tools speak in “SQL statements”, network traffic tools use “packets”, while systems monitors report in “CPU and disk usage”.
So what happens when the application crashes or hangs?
What do you do if a single transaction suffers slow response times?
Enter the “war room”
To cope with this proliferation of information sources, enterprises came up with the notion of the “war room”. Whenever slow response times or poor performance of a critical application is detected, the relevant personnel are gathered into a room for brainstorming and joint monitoring. This involves a large number of professionals, since a single transaction may flow through several infrastructure components. For example, a financial transaction might trigger an HTTP request to an Apache web server installed on Red Hat Enterprise Linux, which in turn calls a WebSphere application server on a Windows machine, flows through an MQSeries queue, and eventually queries an Oracle database. Members of the “war room” typically include Java and J2EE performance experts, Microsoft Windows system managers, Unix (Linux, Solaris, HP-UX, etc.) system managers, database administrators (DBAs), network sysadmins, and proxy specialists, to name just a few. Troubleshooting this way is a lengthy process that can take thousands of man-hours to complete.
The new paradigm – Business Transaction Management
The new generation of systems monitoring and management tools, widely referred to as Business Transaction Management (or BTM), offers a different approach. Instead of monitoring SQL statements, TCP/IP packets, and CPU utilization in isolation, transaction management tools view everything from the application's perspective. In the world of transaction management, an application is seen as a collection of transactions and events, each triggering actions on the infrastructure. The goal is to track every transaction end to end and correlate it with the information collected from the infrastructure. Such an end-to-end view makes it possible to quickly isolate and troubleshoot the root cause of performance problems and to start tuning proactively. This application-centric information base lets a group of professionals work together “speaking” the same language, focusing on facts rather than guesswork.
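To make the correlation idea concrete, here is a minimal, illustrative sketch (not any vendor's actual implementation) of the core BTM mechanism: each infrastructure event carries the ID of the business transaction that triggered it, so events from the web server, application server, queue, and database can be grouped into one end-to-end view and the slowest hop identified. All component names, IDs, and timings below are invented for illustration.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Event:
    txn_id: str      # business transaction ID propagated across tiers
    component: str   # e.g. "apache", "websphere", "mq", "oracle"
    elapsed_ms: int  # time spent in this component

def correlate(events):
    """Group raw infrastructure events by transaction ID."""
    by_txn = defaultdict(list)
    for e in events:
        by_txn[e.txn_id].append(e)
    return by_txn

def slowest_hop(txn_events):
    """Return the component where a transaction spent the most time."""
    worst = max(txn_events, key=lambda e: e.elapsed_ms)
    return worst.component, worst.elapsed_ms

# Hypothetical events collected from four tiers for two transactions.
events = [
    Event("txn-42", "apache", 12),
    Event("txn-42", "websphere", 35),
    Event("txn-42", "oracle", 480),   # the database query dominates
    Event("txn-99", "apache", 8),
]

grouped = correlate(events)
print(slowest_hop(grouped["txn-42"]))  # ('oracle', 480)
```

Instead of each specialist inspecting a separate tool, everyone in the war room sees the same fact: transaction `txn-42` spent most of its time in the database tier.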
According to IDC (“Business Transaction Management – Another Step in the Evolution of IT Management”), BTM will likely become a core offering of established IT system management vendors, since it can contribute to almost every aspect of IT management, ranging from performance management, SLA (Service Level Agreement) management, and capacity planning to change and configuration management (CMDB).