Enhance Learning Outcomes with Metrics that Matter: Key Features & Benefits
Discover how Metrics that Matter by Knowledge Advisors improves learning effectiveness with cloud computing, business intelligence, and robust reporting.

Speaker 1: Welcome to the Metrics that Matter conference module overview by Knowledge Advisors. How does our learning measurement software improve the effectiveness and business outcomes of learning, you ask? Here are five important features and benefits.

One, best practice. Knowledge Advisors is a trusted source of thought leadership for evaluating the effectiveness of critical human capital initiatives. Two, cloud computing. Metrics that Matter, our flagship software, utilizes a cloud-based architecture to enable easy implementation and integration with other enterprise systems. Three, integrated business intelligence. Metrics that Matter integrates data from multiple enterprise systems with information collected through evaluations and assessments. Analysis is rendered through automated dashboards, scorecards, and detailed customizable reports. Four, informal and social learning measurement. Metrics that Matter reaches beyond formal learning programs and learning management systems to measure learning effectiveness in informal and social learning environments. And five, benchmarking. Metrics that Matter includes a benchmark database with over 650 million external data points and 100 standard reports. Benchmark data can be sliced by industry, course type, job function, and more to compare your learning effectiveness against external averages.

Our data collection, storage, processing, and reporting system is what you need to accomplish your measurement goals in a user-friendly environment. We collect, store, process, and report your data in a timely manner, allowing you, the user, to access visual, concise, and articulate reports. Our system has powerful capabilities to take your most basic inputs, from class attributes to evaluations, tests, surveys, and other business data, and translate those inputs into outputs in the form of concise and clear dashboards and reports.

Now let's take you through the three components of our conference module: authoring, data collection, and reporting. The authoring, or setup, of conference evaluations functions similarly to the learning analytics module. You have the ability to create a single conference program as well as include multiple tracks or sessions for each day. The copy and edit features allow you to quickly build or edit your sessions. We can even integrate with your registration system to pass attendee details to Metrics that Matter.

Our data collection allows for both overall and session-level feedback gathering. Conditional questions route attendees to the surveys for the specific sessions they attended, and we have the ability to auto-send surveys so you can gather feedback when you feel it's appropriate (a sketch of this routing logic appears below).

Our robust reporting suite for your conferences will allow you to benchmark your data and calculate ROI. Just like our premier learning analytics module, you may view individual sessions or aggregate your data over time. Doing a comparative analysis by session, for instance, may add value when planning future conferences. Lastly, our consolidated summary report is a one-stop shop for a complete narrative around your meeting, program, or conference.

Now let's go to Metrics that Matter live. When scheduling or adding a conference to the Metrics that Matter system, you'll have the ability to add it or search for it.
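The conditional routing described above can be pictured with a minimal sketch. This is not Knowledge Advisors' implementation; the session names, IDs, and the mandatory flag are illustrative assumptions. It simply shows how an attendee's selections on the overall evaluation could determine which session-level surveys are queued for them.

```python
# Hypothetical sketch of conditional survey routing -- not the Metrics that
# Matter code. An attendee marks which sessions they attended on the overall
# conference evaluation; only those sessions' surveys are then presented.

from dataclasses import dataclass


@dataclass
class Session:
    session_id: str
    title: str
    mandatory: bool = False  # e.g., an opening session checked and grayed out


CONFERENCE_SESSIONS = [
    Session("d1s1", "Day 1, Session 1: Opening Keynote", mandatory=True),
    Session("d1s2", "Day 1, Session 2: Leadership Track"),
    Session("d2s1", "Day 2, Session 1: Sales Track"),
]


def surveys_to_send(selected_ids: set[str]) -> list[Session]:
    """Return the session-level surveys an attendee should receive:
    every mandatory session plus the sessions they said they attended."""
    return [
        s for s in CONFERENCE_SESSIONS
        if s.mandatory or s.session_id in selected_ids
    ]


if __name__ == "__main__":
    # Attendee checked only the Day 2 session on the overall evaluation.
    for session in surveys_to_send({"d2s1"}):
        print(f"Queue survey for: {session.title}")
```

In the live product, the same outcome is configured through conditional questions during authoring rather than through code.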
Once the conference is set up in the system, we then have the ability to edit it, view the surveys that have been assigned, see a summary page of the entire outline of the conference or multi-track event, create a template of that event, or copy it for another session. Let's take a look at the summary. From the summary, we can see the overall conference name and preview the evaluation. In addition, we can see each day and session that have been configured in the Metrics that Matter system.

Once the evaluations have been configured in the system, you're ready to send them to the participants or attendees. The attendees will access the evaluation online, and they will start off with a conference overall evaluation. After going through it and answering questions around the logistics, environment, learning effectiveness, and so forth, they get to the bottom of the evaluation, where they select the specific sessions that they attended. You will notice the first session is already checked and grayed out; this was mandatory for all attendees in our example. Then each attendee chooses the specific sessions that they attended in the conference. Once they submit the survey for the overall conference, the Day 1, Session 1 survey or evaluation is presented to them next. Likewise, for the last session they attended, Day 2, Session 1 in this case, the evaluation shows the facilitators of that particular session, and they answer questions around content, learning effectiveness, and return on investment and submit.

Once that data is collected, we can go in and look at the reporting side of the system. First off is the conference summary. Once the conference summary is generated, it will be emailed to you in the form of a Word document. That Word document is our overall conference summary narrative. It contains all the information for your overall conference as well as individual sessions. Here we can see a conference evaluation data summary, which includes overall satisfaction as well as individual instructor scores by category. Alongside the session-level and instructor data, we can quickly see which categories in the overall conference survey scored the highest or where there may be some room for improvement. Next, we can see each course or session of your multi-track event stack ranked against the others, as well as an average of all your sessions. The data below that shows your instructors, how they compare to each other, and the overall average for the instructor category. Each session is then ranked, and you also get an overall number of responses, all in the same narrative report. If we scroll towards the bottom of the report, we'll see that overall conference qualitative feedback is included, as well as individual session feedback for each one of the multi-tracks in your conference.

The next report we'll look at is the quick question report for the leadership multi-track event. In this case, you'll see that the report is broken down into an overall summary by category. This particular report includes n-counts, the number of evaluations submitted per category, the frequency distribution across your Likert scale, as well as the mean down the right-hand column. Once we get into the report and want to drill down a little bit further, we can look at the individual questions by category as well as verbatim comments. Included here are session-level details.
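To make the quick question report figures concrete, here is a minimal sketch, not the product's reporting engine, of how the n-count, the frequency distribution across a five-point Likert scale, and the mean might be computed for one category from raw evaluation responses. The category name and the ratings are assumed for illustration.

```python
# Hypothetical sketch of the per-category statistics shown in a quick question
# report: n-count, Likert frequency distribution, and mean. Illustrative only.

from collections import Counter

# Assumed raw responses for one category (e.g., "Content") on a 1-5 Likert scale.
content_ratings = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]

n_count = len(content_ratings)               # number of evaluations submitted
frequency = Counter(content_ratings)         # responses per scale point
mean_score = sum(content_ratings) / n_count  # category mean

print(f"n = {n_count}")
for point in range(1, 6):
    count = frequency.get(point, 0)
    print(f"  {point}: {count:2d} responses ({count / n_count:.0%})")
print(f"mean = {mean_score:.2f}")
```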
The session-level detail takes it a little bit deeper than what we saw in the narrative report: now we can look at each individual session and get an overall summary as well as question-level detail with the means and frequency distribution built in. At the bottom of every report in the system, you'll notice a report recommendation. This supplies you with information and ideas around interpreting the data, recommended actions, and complementary one-click reports such as learner comments, variance analysis, or even class-level detail. At the top of every report is our collaborative toolset. You can schedule this report to be sent to all of your facilitators, email it to yourself, print it, convert it to Excel or PDF, or save it for future reference.

In summary, conference, meeting, and multi-track programs within Metrics that Matter make it easy to create your overall and session-level evaluations. Electronic data collection allows you to build in conditional questioning, and reporting supports comparative analysis by event or topic, with benchmarking as well. We greatly appreciate you viewing this recording and hope you found it valuable and useful. Please contact us by phone or email using the information on your screen. Thank you.
