
30th HR Breakfast: Making 360° Feedback More Effective

On October 23, 2025, another of our popular HR Breakfasts took place in the pleasant atmosphere of Space Café in Anděl.

November 2, 2025
Markéta Borovcová

This time, the spotlight was on how to use 360° feedback more effectively—for both individuals and entire teams—with the support of AI-driven interpretation.

Together with 19 HR leaders and managers, we discussed and shared our experiences on how to design 360° feedback properly, how to integrate it into development programs, how to evaluate it, and how to connect it with other HR data so that it delivers truly valuable results that can strengthen individuals as well as teams.

We also touched on how AI is transforming 360° feedback into actionable development, and raised a few important questions related to AI interpretation:

  • Will the human touch and personal support be lost?
  • How can we handle participants’ emotions and ensure a sense of psychological safety?
  • How can we explain transparently what AI does—and what it doesn’t?

A big thank-you to Jana Štoková and Lenka Dočkalová for sharing their valuable case studies!

Jana Štoková took us into the world of Norican Group

She shared how they used 360° feedback before and after a development program for both managers and talents outside managerial roles.
In this case, 360° feedback not only confirmed that the development program had a measurable impact, but also helped identify team-wide development themes, allowing the program to be adjusted accordingly.

Lenka Dočkalová brought her experience from the non-profit organization Sázíme stromy (We Plant Trees).

This organization operates on teal principles—there are no managers, decisions are made collectively, and autonomy of each individual is intentionally supported.
They have been working with 1:1 feedback for several years, and their main governing body, the “Stromorada”, is already experienced in this area.
The 360° feedback gave them the opportunity to gather insights from a broader circle of collaborators, especially from coordinators they don’t meet as often.

Lenka also highlighted a few areas where OrbiTal could improve—for example, praising and reinforcing high scores where someone truly excels, or adjusting team report recommendations to avoid repeating what the organization already practices.
We were happy to share that these improvements have already been implemented—phew!

After hearing both case studies, a lively discussion followed.
We explored how large the question set should be, and agreed that less is more.
It proved effective to focus on 5–7 key competencies, use a short rating scale, and keep a reasonable number of items (around five) to avoid rater fatigue.
Adding a few targeted open-ended questions also increased completion rates and improved the quality of comments.

AI Interpretation in OrbiTal

OrbiTal helped participants quickly make sense of their results and translate them into concrete action plans.

  • The team report provided HR and management with a clear overview of the group’s development needs.
  • We emphasized that AI is not “just a simple prompt”: the quality of the output depends on high-quality input data, a well-designed 360° process, and meaningful interpretation.

We also discussed mandatory vs. voluntary participation.
Experience shows that a voluntary approach—supported by a clear “why” and strong leadership backing—works best.
When scaling up, a clear policy (who participates, when, and why) helps, along with keeping participation voluntary at least for evaluators who don’t work directly with the participant.
The key is transparent communication and accessible support for everyone involved.

Our classic case study exercise

Although the lively discussion left less time for it (thank you for that!), participants worked in groups with fragments of 360° reports, a fictional organizational context, and performance metrics (including the 9-box model).
Most teams identified key development themes almost identically to our OrbiTal team report—which was great confirmation!

Interestingly, one team even developed a “conspiracy theory” about hidden issues within the company 😄—a great reminder of how important critical thinking is when interpreting data and avoiding assumptions.

Three main lessons for practice

  1. AI isn’t a magic wand.
    It can be a powerful assistant—but it still requires high-quality input data and human expertise to calibrate and interpret results.
  2. Combine data sources.
    Linking 360° results, performance metrics, engagement surveys, and team context significantly increases the value of interpretation.
  3. Feedback will always be sensitive.
    It’s important to be aware of the risks, but don’t let them hold you back—360° feedback definitely has high added value when done right.

A huge thank-you to our guests from Norican and Sázíme stromy for sharing their experiences, and to all participants for their great questions and energy.

Want to feel the atmosphere of the breakfast? Check out the photo gallery.

Curious to see OrbiTal in action? We’d be happy to:

  • Show you sample individual and team reports
  • Prepare a pilot and help you set up competencies and scales
  • Create a custom question set together
  • Deliver individual and team reports in OrbiTal
  • Help you connect 360° feedback with development programs and measure impact
