Monitoring Metrics
Overview
This section provides at-a-glance metrics on case creation and closure, work item execution, Apps usage, AI operations, and delayed, disabled, and pending tasks. It includes trend analysis to show usage over time.
Key figures are displayed, with links to detailed dashboards for deeper analysis.
To better understand these metrics, it helps to understand the difference between a case and a workitem:
•A case represents an instance of a process. It encompasses all the activities, tasks, and data associated with a specific occurrence of a process. For example, if you have a process for handling customer complaints, each individual complaint would be a separate case. Cases track the overall progress and status of the entire process instance.
•A workitem refers to a specific task or activity within a case. It is a smaller unit of work that needs to be completed as part of the overall process. Workitems are assigned to users or the system to perform specific actions, such as reviewing a document or approving a request. Each case contains multiple workitems, each representing a step in the process.
In summary, a case is the broader process instance, while a workitem is an individual task within that process. Monitoring both allows you to track the overall progress of processes (cases) and the completion of specific tasks (workitems).
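The case/workitem relationship described above can be sketched as a minimal data model. This is purely illustrative: the class names, fields, and closing rule are assumptions for the sketch, not Bizagi's actual schema or behavior.

```python
from dataclasses import dataclass, field

@dataclass
class Workitem:
    # Hypothetical fields; illustrative only, not Bizagi's schema.
    task_name: str
    assignee: str
    closed: bool = False

@dataclass
class Case:
    process: str                      # e.g. "Customer Complaint"
    workitems: list = field(default_factory=list)

    def is_closed(self) -> bool:
        # For this sketch, a case stays open until every workitem
        # inside it has been completed.
        return all(w.closed for w in self.workitems)

complaint = Case("Customer Complaint", [
    Workitem("Review document", "analyst"),
    Workitem("Approve request", "manager"),
])
complaint.workitems[0].closed = True
print(complaint.is_closed())  # False: one workitem is still open
```

Monitoring cases tracks the outer object's lifecycle; monitoring workitems tracks the individual steps inside it.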
Home Monitoring metrics
The following metrics are displayed in the Metrics pillar on the Monitoring Center Home. Together, they provide a detailed view of process efficiency, workload management, user engagement, and potential bottlenecks, enabling you to make informed decisions and optimize your workflows.
Over time, these metrics reveal the behavior of your processes. If case creation is increasing while case closures are not keeping pace, or if your case backlog is growing, review your processes: there may be factors hindering the efficient execution of work.
Pay close attention to the metrics below that are marked in red. When information appears in these panels, action may be required.
Created Cases
This widget displays the cumulative number of cases created within the specified time frame. The data is calculated as a sum over time. The tile offers insight into process workflow efficiency.
Cases Backlog
This widget displays the total number of open cases over the specified period. It provides an aggregated view of open cases in 15-minute intervals, helping to monitor and manage case backlog effectively.
Closed Cases
This widget displays the cumulative sum of closed cases over the specified period. The metric is calculated by summing up the number of closed cases within an hourly interval, with data offset by 75 minutes to ensure accuracy in time-based reporting.
Created Workitems
This metric displays the total number of work items created over a specified period. The metric is calculated by summing up the new work items within an hour, with an offset of 75 minutes. It provides insight into the volume of work initiated in the system during this timeframe.
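The "hourly sum with a 75-minute offset" calculation described for Closed Cases and Created Workitems can be sketched as follows. The offset semantics here (shifting each event back by 75 minutes before bucketing, so that late-arriving data has settled before a window is reported) is one plausible interpretation for illustration; the actual implementation is internal to Bizagi.

```python
from collections import Counter
from datetime import datetime, timedelta

def hourly_counts(timestamps, offset=timedelta(minutes=75)):
    """Bucket event timestamps into hourly windows, shifted by
    `offset`. Illustrative sketch only, not Bizagi's code."""
    counts = Counter()
    for ts in timestamps:
        shifted = ts - offset
        # Truncate to the start of the hour to form the bucket key.
        bucket = shifted.replace(minute=0, second=0, microsecond=0)
        counts[bucket] += 1
    return dict(counts)

events = [
    datetime(2025, 5, 11, 10, 20),
    datetime(2025, 5, 11, 10, 50),
    datetime(2025, 5, 11, 11, 30),
]
# The first two events fall in the 9:00 bucket after the shift,
# the third in the 10:00 bucket.
print(hourly_counts(events))
```

The 15-minute intervals used by the backlog widgets follow the same idea with a smaller window.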
Workitems Backlog
This widget displays the total number of open work items accumulated over the specified period. The metric is calculated by summing the values of open work items in 15-minute intervals, offset by 15 minutes to account for recent data changes.
Closed Workitems
This widget displays the total number of closed work items over the specified period. It provides an accumulated count, reflecting how many work items have been completed or closed within the monitored timeframe.
Apps Used
This widget shows the count of distinct Bizagi Apps used within a specified timeframe. It calculates the metric by filtering logs to identify unique applications accessed. The result provides insight into application engagement and utilization.
Apps Users
This widget displays the number of distinct users interacting with applications over the given period. The metric is derived by filtering data to count unique user profiles accessing the applications, providing an overview of user engagement and activity levels.
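The distinct-count logic behind Apps Used and Apps Users can be sketched like this. The access-log structure and field names are assumptions for illustration; only the idea (repeat accesses count once) comes from the descriptions above.

```python
# Hypothetical access-log records; field names are illustrative.
access_log = [
    {"app": "Onboarding", "user": "ana"},
    {"app": "Onboarding", "user": "luis"},
    {"app": "Expenses",   "user": "ana"},
    {"app": "Onboarding", "user": "ana"},  # repeat visit counts once
]

# Sets deduplicate, so each app and each user is counted only once.
apps_used  = len({entry["app"]  for entry in access_log})
apps_users = len({entry["user"] for entry in access_log})
print(apps_used, apps_users)  # 2 2
```

AI Agents Used follows the same pattern: each agent is counted once regardless of how many times it ran.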
AI Agents Used
This widget displays the total number of unique AI agents that have been utilized within the specified timeframe. Each AI agent is counted only once, regardless of how many times it was used during this period. This metric provides insight into the diversity of AI resources employed over time.
Ask Ada Chats
A chat is the entire session of interactions a user has with the Ask Ada assistant before the conversation context is lost. The conversation keeps its context as long as the user continues interacting without leaving the assistant or the application; during a single chat session, a user may ask multiple questions and receive multiple answers within the same continuous interaction. When the end user exits the Ask Ada interface or the application and returns later, the previous context is considered lost: any new interactions begin a new chat session, which is counted separately in the chart. This widget displays the total number of chat sessions handled by Ask Ada within the specified period. It calculates this metric by counting distinct chat sessions, providing an overview of user engagement.
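The session-counting rule above can be sketched as a distinct count over session identifiers. The event shape and the idea that a new session id is issued when the user returns are assumptions for the sketch, not Bizagi's internal representation.

```python
# Illustrative: each event is (user, session_id). A new session_id
# is issued when the user returns after leaving the assistant, so
# multiple questions within one chat count as a single session.
events = [
    ("ana",  "s1"), ("ana", "s1"),   # two questions, one chat
    ("ana",  "s2"),                  # ana left and came back: new chat
    ("luis", "s3"),
]
total_chats = len({(user, sid) for user, sid in events})
print(total_chats)  # 3
```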
Delayed Manual Tasks
This widget displays the total number of delayed manual tasks, that is, manual work items that have not been completed within the expected time frame. It shows how this number has changed over time, along with the current count, highlighting potential bottlenecks in process execution. The graph turns red when the count reaches 50 (this threshold is not customizable).
Async Tasks In Console
This widget shows the total number of asynchronous tasks currently present in the Async console, requiring human interaction to move forward. It displays how the number of tasks in console has changed over time, along with the current count.
Reassigned Tasks
This widget displays the total number of tasks that have been reassigned within the given time frame. If there are more than 50 reassignments, the graph turns red to signal a potential issue (this threshold is not customizable). A high number of reassignments may indicate a problem in task allocation definitions that should be reviewed: frequent reassignments over time suggest that tasks are not being designed or assigned correctly.
Disabled Jobs
This widget shows the total number of jobs that have been disabled. This provides insight into workflow interruptions and helps identify areas where job execution has been halted, requiring human interaction to move forward.
Last Updated 5/11/2025 8:29:24 PM