Analyzing Talent Development Data from the Bend Endurance Academy


The Climbing Athlete’s Environment


Mike Rougeux’s team, the Bend Endurance Academy (BEA), punches hard for its numbers.  At the 2019 Bouldering Youth National Championships, the team took 8th place out of 88 teams with just 5 climbers.  The top ten teams averaged 13.5 climbers each, with the next-smallest team at 8 and the largest at 20.  In fact, if you look at points per team member, the numbers are a little eye-opening.  Mike’s team averaged 43 points per team member, compared to 31.6 points (Stone Summit), 27 points (Team ABC), and 25.6 points (Vertical World) for the top three teams.  That’s not an entirely fair comparison, since the rules state that a team only gets points for its top competing member per youth category, so a team could be scored on at most 10 climbers.  
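The points-per-team-member comparison above boils down to a one-line calculation; a minimal sketch using BEA’s figures from this article:

```python
# A small helper for the points-per-climber comparison made above.
def points_per_climber(total_points: float, roster_size: int) -> float:
    """Average championship points earned per rostered team member."""
    return total_points / roster_size

# BEA: 5 climbers averaging 43 points each implies 215 total team points
print(points_per_climber(215, 5))  # 43.0
```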

“Points per climber” is not the best metric. It doesn’t take into account the challenges of managing large teams or filling out smaller teams. Some day I’m going to have to invest time into a fairer comparison.

However, with more climbers you have more opportunity to do well and with fewer climbers you aren’t competing in all categories.  As a result, it’s a qualified argument that is only intended to provide a little context for this analysis.  Special note: Congrats to Mike and his athlete Mira Capicchioni who took the gold medal in Category Female Youth B in Bend, Oregon (February, 2019).  What a fantastic hometown win!

In December 2018, Mike reached out to me about a blog post I wrote (see summary below) regarding the Talent Development Environment Questionnaire, or TDEQ.  After some discussions about the intent of the survey with a UK researcher, I agreed to provide a copy of the survey to youth team coaches who asked.  Mike’s intent was (1) to help his athletes understand how their environment is and is not supportive of their performance, and (2) to help the program understand where it can do better.  I also provided it to several of my own athletes (I work with a small handful of athletes from around the country).  Mike had his athletes complete the survey anonymously, while I did not. 

In Sum

  • Use the survey anonymously to focus at the team level, and use it non-anonymously to focus at the individual level.
  • Look across the Factors and prioritize based on question average.
  • Look across the questions and prioritize based on score, standard deviation, and weight.
  • We need a formula for incorporating weight in order to take the guesswork out.
  • The Bend Endurance Academy seems to have its Long-Term Development Focus, Fundamentals, and Support Network down, but should consider prioritizing its Quality Preparation and its Challenging and Supportive Environment.

What does the survey measure?  You can see a summary of the survey at this link.  It is a seven-factor, 59-question survey for gathering data about the non-climbing-related factors in an athlete’s environment that research suggests impact sport performance in general.  None of it is based on climbing-specific research.  Each question belongs to one of seven groups, or “factors.”  Each question is scored 1-6, from “Strongly Agree” (1) to “Strongly Disagree” (6).  Lower overall scores are better.  The lowest possible total is 59 and the highest is 354. 
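The scoring scheme just described can be sketched in a few lines (the response values here are illustrative):

```python
def total_score(responses):
    """Sum a complete set of TDEQ responses; each answer is an integer
    from 1 ("Strongly Agree") to 6 ("Strongly Disagree")."""
    assert len(responses) == 59, "the version described here has 59 questions"
    assert all(1 <= r <= 6 for r in responses), "answers must be 1-6"
    return sum(responses)

# The best and worst totals possible
print(total_score([1] * 59))  # 59
print(total_score([6] * 59))  # 354
```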

Analyzing the Team vs. the Individual

The team scored an average of 129.2 ±17.54.  I interpret this to mean that the athletes like BEA.  Because I only coach a small number of climbers, and to help preserve anonymity, I gave my survey to five athletes but only took data from two (chosen at random).  By way of comparison, my athletes scored 169 and 126 (avg: 147.5).  Mike wins. 

The values I provided are from only two athletes, as an overly simplistic comparison.  My athletes all have multiple coaches and are on teams I am not affiliated with.  As a result, I can’t use these numbers to rate my “program,” since this survey stresses the overall environment – I’m just one factor (no pun intended) in their responses.  It’s a bad idea to compare my numbers with his, but absent a better option I’m doing it anyway.  I hope this encourages you and your team not only to consider using the survey (which is open source, compliments of R.J.J. Martindale et al.) but also to share anonymous, aggregated data.  You will not see any of Bend’s individual scores here.  🙂

Unlike Mike, however, I do know what each of my athletes scored, and so I was able to address each athlete individually rather than as part of a team.  My approach was to go over all questions rated 3 or higher (the only 3’s I targeted were those I felt we could improve based on my personal skill set).  Each conversation lasted more than two hours.  Because Mike didn’t know how each of his athletes individually scored, he used averaged data to prioritize the questions he wanted to address with his team and then discussed them in practice.  Using his cut-off score of 2.8, he had 9 prioritized questions:

Table 1

Mike’s Comments by Question:

On Factor 1, Question 22: “There’s currently no other competitive team in town so they really don’t have anywhere to go.”
On Factor 1, Question 23: “Not sure if this was due to misunderstanding the question.  Need to figure out why that perception is there.”
On Factor 2, Question 2: “We tend to focus on setting the athletes up for how to prevent things from going wrong but need to address how to deal with things that don’t go their way.”
On Factor 2, Question 4: “More focus on this as we thought we have been speaking to this but might not be resonating with the climbers or maybe the younger climbers?”
On Factor 4, Question 1: “This one we were most surprised with as we check in with the kids a lot.  More attention can be focused on making sure the athletes know that we are checking in on their well-being.”
On Factor 5, Question 1: “We have a nutritionist as one of our coaches but the athletes may not know they can utilize her as a resource.  As the organization grows consider partnering with more professionals that can help with sports dev.” 
On Factor 5, Question 5: “Hard to come up with a way to bridge that gap as a scholastic sport.”
On Factor 6, Question 1: “Hard to come up with a way to bridge that gap as a scholastic sport.” 
On Factor 6, Question 4: “On a team level I feel our athletes do get help from the more experienced performers.  Maybe see about having visiting pro climbers come in to practice more?”
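Mike’s 2.8 cut-off amounts to a simple filter over the per-question team averages. A minimal sketch, using made-up question IDs and scores rather than his actual data:

```python
def prioritize(question_averages, cutoff=2.8):
    """Flag questions whose team-average score exceeds the cutoff,
    worst (highest average) first."""
    flagged = [(q, avg) for q, avg in question_averages.items() if avg > cutoff]
    return sorted(flagged, key=lambda qa: qa[1], reverse=True)

# Hypothetical (factor, question) averages
averages = {("F1", 22): 3.4, ("F2", 2): 3.0, ("F3", 5): 1.6}
print(prioritize(averages))  # [(('F1', 22), 3.4), (('F2', 2), 3.0)]
```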

Prioritizing Among So Many Questions

One of the challenges associated with the information is how to prioritize what you focus on, especially with a 59-question survey that says not one word about climbing.  My preference for prioritization involves the following framework:

  • Factorial focus
  • Question weight (earlier questions, by number, weigh more heavily than later ones)
  • Standard deviations
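As noted earlier, there’s no settled formula for combining these yet. Purely as an assumption to illustrate the framework, one might weight earlier questions linearly and add the spread of responses:

```python
from statistics import mean, stdev

def priority_score(scores, question_number, total_questions=59):
    """Hypothetical priority score: higher means more worth addressing.
    The linear weight and additive spread term are assumptions, not a
    formula from the TDEQ literature."""
    weight = 1 - (question_number - 1) / total_questions  # earlier -> nearer 1
    spread = stdev(scores) if len(scores) > 1 else 0.0
    return mean(scores) * weight + spread

# Question 1 with uniform responses of 3: full weight, no spread
print(priority_score([3, 3, 3], question_number=1))  # 3.0
```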

Running a larger program, Mike decided to flag only outlying responses with an average score above 2.8.  Later, Mike and I discussed how to identify outliers in his data.  He used maximums and minimums to get a sense of what his outliers looked like across the data.  Additionally, I calculated the standard deviation (SD) of his numbers to see how uniform his responses were.  This is what they look like at the factor level.
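Rolling anonymous responses up to the factor level is a mean-and-SD calculation; a sketch with hypothetical per-question averages (not Mike’s data):

```python
from statistics import mean, stdev

# Hypothetical per-question team averages, grouped by factor
factor_scores = {
    "Long-Term Development Focus": [1.4, 1.8, 2.1, 1.6],
    "Quality Preparation": [2.9, 3.1, 2.4, 2.8],
}

for factor, scores in factor_scores.items():
    m, s = mean(scores), stdev(scores)
    # For roughly normal data, about two-thirds of values fall within one SD
    print(f"{factor}: mean={m:.2f} sd={s:.2f} band=({m - s:.2f}, {m + s:.2f})")
```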

Factor Comparison

First, the obvious: Mike’s Long-Term Development Focus, Fundamentals, and Support Network appear to be solid.  Additionally, if all I went off were my own numbers (again – bad idea), then his ability to understand the athlete is phenomenal.  He could probably work most on his Quality Preparation and his Challenging and Supportive Environment.  As a result, I went to Factors 2 and 6.  Two other questions were approaching 2.8 for Factor 2: unclear guidelines for progressing in the sport, and peer pressure to do things that differ from the coaches’ direction.  My opinion is that the former should be given more priority, especially since it’s an earlier question with a higher weight.  The number of “approaches” to climbing is insane, and as the head coach of one of the top 3 teams in the country told me in Bend: “Taylor, we’re just guessing.”

With Factor 6, I also took a look at question 2, since it was weighted higher: the athlete is regularly told that winning and losing in the present doesn’t indicate future success.  This is an area that seems simple to implement, but I’m not entirely clear on effective approaches for ensuring the athlete internalizes the point.

Two-thirds of Mike’s Factor 1 values fall within 0.49 of 1.84, or between 1.35 and 2.33.  That range is larger than Factor 2’s, even though Factor 2 has a higher average score.  However, because Factor 1 never rises to a level that meets his minimum threshold (2.8), is it really a priority?  Standard deviation can be quite useful with anonymous responses for determining whether your athletes view your program cohesively.

Average Scores by Question with Factor Groups by Color

As you can see, Factor 6 both scores high and shows the most variation.  After that comes Quality Preparation, which is high but has little variation, and Support Network, which is third highest but has the second-most variation.  Long-Term Development Focus is the one I might watch for the occasional outlier, but because the average is so low it might not be worth your time.  This is one reason a “question focus” matters.  

The argument I can make for a factorial focus is that it may help galvanize resources toward a heuristic, or a simplification, such as: “we need to build our support network.”  This may help you become more creative in finding solutions to the overarching issue.  If you want your team to “perceive” the importance of a factor, it may well be positive to have a strategy associated with the overarching factor (e.g. creating partnerships with local specialists), even while you try to pinpoint the specifics you want to change based on each individual question.  Just remember that individuals on your team will have different perceptions, and it may be worth identifying what those different perceptions are even if you make the survey anonymous.

After Nationals in Bend, Mike and I exchanged a few more words.  His final comment to me: “I’ve worked quite a bit on our team culture / environment the last year and have been happy with how things have progressed with our team overall.  Nationals was a good visual of that increased team culture for me with tons of teammates rallying for the BEA competitors, volunteering, and of course climbing well.  Super psyched on Mira’s National Title and for our small team of 5 climbers at Nationals for finishing 8th in the team standings.”