Event Analytics and Quality of Experience Reports
Event and Account Admins may access the Event Analytics Reports once the event has ended.
To access and download event analytics reports/metrics:
1. Navigate to User > Events > Event Calendar > Event Name.
2. Click the Reports button to access Engagement Analytics and Quality of Experience tabs.
 
Note: The Reports button will contain both Main Event and Pre-Production report access. If the Main Event has not yet started, only Pre-Production events will be displayed. If no Pre-Production dry runs were conducted, only the Main Event report will be displayed.
If no Webcasts have occurred yet, the drop-down will display, “No reports are currently available” until one has concluded.
3. The Download button also contains links to download the Chat, Questions & Answers, Attendees, and Polls reports, which export that data along with the information included on the tabs detailed below.
4. The Reports button features three tab views (Engagement Analytics, Quality of Experience, and Users) along with the Download button.
Engagement Analytics
The Engagement Analytics tab displays the following information.
Total number of Attendees (internal versus public). Includes the Estimated number of Webcast attendees entered during the event setup if this field was utilized.
Total Viewing Time (internal versus public)
Average Viewing Time
Attendee Trends (sessions by time/duration). View how many attendees/sessions are connected to the event at any given time. This allows you to see when people connect and drop off.
Top 15 Zones used to connect to the event.
Web Browsers used
Device Types used
Quality of Experience
The Quality of Experience tab displays the following information.
Average Zone Bit Rate Kbps (for all zones)
Average Zone Bandwidth Mbps (for all zones)
Average Zone Rebuffering Duration (in seconds for all zones)
Number of Rebuffering Events that occurred (for all attendees)
Multicast Errors (for all attendees)
Average Zone Bit Rate (for all zones, by event time)
Average Zone Bandwidth (for all zones, by event time)
Rebuffering events
Bars represent the total number of rebuffering events by zone
Dot on each bar represents rebuffering duration
Video Player (type used and numbers by zone)
Video Stream (type and number of streams by zone)
Multicast Errors experienced by zone
Quality of Experience FAQs
The questions below have been asked about the data viewed on the Quality of Experience tab.
Average Zone Bit Rate
Question: Can you please explain the “average zone” portion of this metric and what is meant?
Answer: The metric displays an average of the average bit rate in each zone. The average bit rate provides administrators a quick validation that, on average, users are experiencing the intended stream quality.
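As a hypothetical illustration of the averaging described above (the figures are examples only, not measured product behavior): if Zone A's attendees averaged 900 Kbps, Zone B's 750 Kbps, and Zone C's 600 Kbps, the value displayed would be (900 + 750 + 600) / 3 = 750 Kbps.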
Question: Can you please explain what is considered a Great/Good/Bad/Poor value for the metric?
Answer: The expected bit rate is highly dependent on the underlying source, network and Zone configuration. The delta between “expected” and “actual” is what potentially indicates great/good/poor for customers.
Question: What is the “Normal” or acceptable (Min/AVG/Max) threshold that is expected for this metric?
Answer: An administrator who is familiar with the underlying network capabilities should be able to compare the average bit rate to the expected bit rate. If the average is significantly different from what is expected, it may indicate a misconfiguration of the source, network, DMEs, or zones.
For example, an average bit rate of 500 Kbps would be great news in the case of an MBR stream source of 500 Kbps and 256 Kbps because it means most viewers are getting the best stream. However, the same average bit rate of 500 Kbps would be bad news in the case of an MBR stream source of 1.5 Mbps and 500 Kbps, as it would mean that most users received the lower quality stream.
Question: How does this metric provide information about the “Quality of Experience”?
Answer: To the extent that the actual bit rate is significantly different from (less than) expected, it may indicate that viewers had a lower quality viewing experience.
Average Zone Bandwidth
Question: Can you please explain the “average zone” portion of this metric and what is meant?
Answer: This metric displays an average of the average bandwidth in each zone.
Question: Can you please explain what is considered a Great/Good/Bad/Poor value for the metric?
Question: What is the “Normal” or acceptable (Min/AVG/Max) threshold expected for this metric?
Answer: An administrator familiar with the underlying network capabilities should be able to compare the average bandwidth to the expected bandwidth. If the average is significantly different from what is expected, it may indicate a misconfiguration of the network, DMEs, or zones.
Question: How does this metric provide information about the “Quality of Experience”?
Answer: To the extent that the actual bandwidth is significantly different from (less than) expected, it may indicate that viewers had a lower quality viewing experience.
Average Zone Experienced Rebuffering
Question: Can you please explain the “average zone” portion of this metric and what is meant?
Answer: This metric displays an average of the average rebuffering experienced in each zone. Experienced rebuffering events are cumulative from the start of the Webcast and are defined as those rebuffering events that affect a user, typically with a visible “spinner”. This excludes event counts caused by the initial buffering at the beginning of an event.
Question: Can you please explain what is considered a Great/Good/Bad/Poor value for the metric?
Answer: A low average rebuffering event count and a low rebuffering duration indicate that end users did not suffer pauses or spinners while viewing the Webcast. You should also look at these numbers in the context of the total number of attendees. A small number of rebuffering events indicates that few people experienced rebuffering. The duration reflects, for those attendees who did experience rebuffering, how long (on average) they waited through a pause/spinner.
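As a hypothetical illustration (the figures are examples only): if a Webcast had 1,000 attendees, 40 rebuffering events, and an average rebuffering duration of 2 seconds, that would suggest only a small fraction of viewers paused at all, and that those who did typically waited about 2 seconds for playback to resume.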
Question: What is the “Normal” or acceptable (Min/AVG/Max) threshold expected for this metric?
Question: How does this metric provide information about the “Quality of Experience”?
Answer: High rebuffering event counts may be an indication that a number of attendees experienced pauses and spinners while watching the Webcast.
Number of Experienced Rebuffering Events that Occurred
Question: Can you please explain “rebuffering” versus “buffering” when used on analytic reports and dashboards?
Answer: Rebuffering excludes buffering event counts caused by the initial buffering at the beginning of an event. Experienced rebuffering events are cumulative from the start of the Webcast and are defined as those buffering events that affect a user, typically with a visible “spinner” occurring for a user.
Question: Can you please explain what is considered a Great/Good/Bad/Poor value for the metric?
Question: What is the “Normal” or acceptable (Min/AVG/Max) threshold that is expected for this metric?
Question: How does this metric provide information about the “Quality of Experience”?
Answer: Under normal circumstances, a small number of rebuffering events per user is expected and typical. A large number (disproportionate to the number of users) may indicate issues in the playback experience for users.
Multicast Errors
Question: Can you please define what is meant by a Multicast Error and how they occur?
Answer: Often, a multicast error is an indication that a viewer switched from a multicast stream to a unicast stream for viewing the Webcast. A multicast error will not result in a failover, however, if the zone has not been configured for failover.
Question: Is there a place that these “Errors” are logged and stored? If so, how are they accessed? Should we be taking a deeper dive into these “Errors” when they are reported?
Answer: Vbrick does not currently expose the details of multicast errors / player errors via the Rev administrative UI. Vbrick does extensively log these errors and surface aggregates as part of the analytics displayed for the event. Vbrick is continuously working to enhance Rev’s analytics and diagnostic capabilities and may expose more of the raw, detailed multicast error (and other player error) data in the future.
Question: Can you please explain what is considered a Great/Good/Bad/Poor value for the metric?
Question: What is the “Normal” or acceptable (Min/AVG/Max) threshold expected for this metric?
Question: How does this metric provide information about the “Quality of Experience”?
Answer: Under normal circumstances, if a Zone and its underlying network are configured for Vbrick multicast (or Flash multicast is enabled), then you should expect a good proportion of the total attendees to consume a multicast stream, a smaller number to consume a unicast stream, and a small number of multicast error events to occur.
Users
The Users tab displays the following information.
Name (First Name, Last Name)
Viewing Time (Total time in Webcast)
Zone
Web browser used (Safari, IE, Chrome, Edge, Mobile browsers specified)
Stream Type
Rev Connect Peer Mesh Efficiency (only visible if Rev Connect zones are enabled)
Device Type (Mobile / PC)
The Find User search bar may be used to find a specific user.