Dear Reader!

Today I’d like to tell you a little bit about the mentorship program I’m managing at Quantum Open Source Foundation (QOSF).

This article serves three primary purposes:

  1. To give you some information about the program itself
  2. To help you understand how the application process works for all the applicants
  3. To be useful for other people/groups organizing similar initiatives

A quick overview of the program

(From the program’s website) The idea for this program comes from the fact that many people from diverse backgrounds want to learn more about quantum computing, but they face many challenges, such as:

  • A shortage of educational materials at the intermediate level,
  • Many “blind alleys” that one can get into, which might severely slow down progress,
  • The ease of getting a distorted view of the field from the outside.

By pairing people with mentors, we want to help them overcome these hurdles and create interesting projects, which will, in turn, help others.

So how does it work?

  1. People apply
  2. They complete the screening tasks
  3. Mentors select whom they want to work with
  4. The selected mentees work on an open-source project with their mentor

In this article, I will focus on the first 3 steps.

Numbers

The timeline for the second batch was as follows:

  • We announced the program and opened applications on September 1st (2020)
  • We closed applications on September 13th
  • People had 2 weeks to submit solutions to their screening tasks, so the final deadline for those who applied on the last day was September 27th
  • We evaluated the screening tasks until roughly October 20th
  • We officially started the new batch on October 24th

And the numbers:

  • We got about 750 applications.
  • Out of this, about 280 people submitted their solutions to the screening tasks.
  • 36 people got into the program.

So as you can see, about 37% of people who applied actually submitted solutions to the screening tasks, and out of those, 12% got into the program.

I’d like to emphasize that this is a big deal by itself, as solving these tasks required some effort. After the program started, I asked people on the QOSF Slack how much time it took them to finish the tasks, and I got 64 responses with the following distribution:

  • Less than 5 hours (3 votes, ~5%)
  • 5-10 hours (18 votes, ~28%)
  • 10-20 hours (29 votes, ~45%)
  • 20+ hours (14 votes, ~22%)

This means that people collectively spent around 4,000 hours (very rough estimate) expanding their QC knowledge and building up their software skills. This is already a huge success when it comes to popularization and education! Sure, I don’t claim that they would be sitting idle otherwise, but many people gave very positive feedback about this experience and mentioned that such focused tasks really helped them learn many new concepts.
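
For what it’s worth, here is how that rough number can be reproduced; a small sketch that assumes a representative number of hours per poll bin (3, 7.5, 15, and 25 hours are my own guesses) and extrapolates the 64 responses to the ~280 people who submitted solutions:

```python
# Back-of-the-envelope estimate of the total time spent on the screening tasks.
# Assumptions: a representative number of hours per poll bin (my own guesses)
# and that the 64 respondents are typical of all ~280 submitters.
poll = [
    (3.0, 3),    # "Less than 5 hours"  -> assume ~3 h each
    (7.5, 18),   # "5-10 hours"         -> midpoint 7.5 h
    (15.0, 29),  # "10-20 hours"        -> midpoint 15 h
    (25.0, 14),  # "20+ hours"          -> assume ~25 h each
]

responses = sum(count for _, count in poll)                  # 64
avg_hours = sum(h * count for h, count in poll) / responses  # ~14.5 h per person
total_hours = avg_hours * 280                                # ~4,000 h overall

print(f"average per person: {avg_hours:.1f} h, total: {total_hours:.0f} h")
```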

Screening tasks

There were 4 screening tasks – you can find them all here.

Here’s the general outline of what the tasks were about:

  • Write a circuit with a layered ansatz (somewhat resembling QAOA) and optimize its parameters to reproduce a certain state.
  • Create a VQA (variational quantum algorithm) which reproduces the |01⟩ + |10⟩ state.
  • Create a simple compiler which takes a circuit written using basic gates and translates it into a circuit using only a restricted gate set.
  • Write a Variational Quantum Eigensolver and find the ground state of a certain Hamiltonian.

For most of these tasks, there already exist software tools that let you generate a solution quickly; we explicitly forbade using them, as that would defeat the purpose.
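
To give a flavour of what a solution could look like (this is just my own minimal sketch, not one of the graded submissions), here is the second task done from scratch in plain NumPy/SciPy: a two-parameter RY ansatz followed by a CNOT, optimized to maximize the overlap with (|01⟩ + |10⟩)/√2.

```python
import numpy as np
from scipy.optimize import minimize

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s],
                     [s, c]])

# CNOT with qubit 0 as control and qubit 1 as target,
# in the basis order |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Target state (|01> + |10>) / sqrt(2).
TARGET = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)

def ansatz_state(params):
    """One possible ansatz: RY(theta0) x RY(theta1) followed by a CNOT, acting on |00>."""
    theta0, theta1 = params
    psi = np.zeros(4)
    psi[0] = 1.0                                  # start in |00>
    psi = np.kron(ry(theta0), ry(theta1)) @ psi   # independent RY rotations
    return CNOT @ psi                             # entangle the two qubits

def infidelity(params):
    """Cost function: 1 - |<target|psi(params)>|^2."""
    return 1.0 - abs(TARGET @ ansatz_state(params)) ** 2

result = minimize(infidelity, x0=[0.1, 0.1], method="Nelder-Mead")
print("angles:", result.x)        # one optimum is roughly [pi/2, pi]
print("infidelity:", result.fun)  # should be ~0
```

Any of the mainstream frameworks (Qiskit, Cirq, PennyLane, etc.) would work just as well; the point is the structure: an ansatz, a cost function, and a classical optimizer.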

Each applicant was supposed to fill out a Google form and attach a link to the GitHub repository containing their solution.

We assessed each solution using three metrics, each on a scale from 1 (terrible) to 5 (brilliant):

  • Code quality
  • Presentation
  • Science

The assessment process was very subjective. It was more about answering questions like “Is this code well written?” or “Does this person understand what they’re doing?” than about going through a checklist and assigning points for meeting specific criteria. In addition, each solution could get up to 3 bonus points if the person assessing it particularly liked something about it.
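
Just to make the rubric concrete, here is a tiny, purely illustrative sketch of how one reviewer’s scores for a single submission add up; the field names and the helper are mine, not the tooling we actually used:

```python
from dataclasses import dataclass

@dataclass
class SolutionScore:
    """One reviewer's assessment of a single submission (illustrative only)."""
    code_quality: int   # 1 (terrible) .. 5 (brilliant)
    presentation: int   # 1 .. 5
    science: int        # 1 .. 5
    bonus: int = 0      # up to 3 extra points if the reviewer particularly liked something

    def total(self) -> int:
        for value in (self.code_quality, self.presentation, self.science):
            assert 1 <= value <= 5, "each metric is scored on a 1-5 scale"
        assert 0 <= self.bonus <= 3, "at most 3 bonus points"
        return self.code_quality + self.presentation + self.science + self.bonus

# Example: a solid submission that earned one bonus point.
print(SolutionScore(code_quality=4, presentation=4, science=4, bonus=1).total())  # 13
```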

We’re well aware that this is not the best or fairest evaluation system you could imagine. However, the purpose of this step is not to find the best solutions to the tasks, but to filter out people who didn’t have the skills, time, or motivation to finish them.

Here are some statistics regarding the distribution of points across all the tasks (excuse the issues with the x-axis labels in the first plot):

[Plots: point distributions for each of the screening tasks]

Selection process

Out of the people who completed the screening tasks, mentors could choose whom they wanted to work with. They had full autonomy here and could pick anyone from this pool based on their preferences. What we provided was guidance; below is the information I shared with the mentors:

  • I encourage you to pick people from diverse backgrounds and those who you think can benefit most from the program or who might have the hardest time getting into the field.
  • The points for the screening tasks are a little bit arbitrary. Different people were grading them, and each used different criteria. For me personally, it also probably varied from day to day. So this is how I would read the scores:
    • 7 and less – they did a rather poor job
    • 7.5-8.5 – ok overall, but there were some issues
    • 9 – a standard solution, good but nothing special
    • 9.5-11.5 – good, solid submissions
    • 12 and more – mostly model submissions, someone did a really good job!
  • Points only reflect the quality of the screening-task solution, and scoring was done regardless of someone’s education/experience. If a high-school student scored 11, it’s much more impressive than a Ph.D. student scoring 12.
  • The main goal of having the tasks scored is to filter out people who are probably not a good fit (7 and less) and to highlight those who are exceptional (12+). Everyone from 9 up seems to be good material for the mentorship, and unless you want to work with someone exceptional, I’d pay more attention to the info they gave in their application rather than to their score.
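
Purely as an illustration (a hypothetical helper, not anything we actually ran), the banding above can be summed up as:

```python
def interpret_score(score: float) -> str:
    """Map a screening-task score to the rough bands described above (illustrative only)."""
    if score <= 7:
        return "rather poor job"
    if score < 9:       # roughly the 7.5-8.5 range
        return "ok overall, but with some issues"
    if score < 9.5:     # around 9
        return "standard solution, good but nothing special"
    if score < 12:      # roughly the 9.5-11.5 range
        return "good, solid submission"
    return "mostly a model submission"

print(interpret_score(10))   # -> good, solid submission
```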

Mentors were free to choose how many mentees they wanted, as long as they knew they’d have time for mentoring them.

I think it’s important to point out that the score was only one indicator. For some mentors it mattered a lot, while for others it didn’t matter at all. I know of many mentors who shortlisted several participants and then reviewed their applications, CVs, GitHub profiles, websites, etc., in detail to make sure they’d be working with the right person.

There was an option to apply in groups, but some mentors also decided to form groups from the applicants on their own.

Takeaways

We’ve learned a lot from the application process. The key takeaways are around our internal organization and process, so I won’t bother you with the details (more automation, better consolidation of data, clearer communication, etc.). Instead, I’ll share where we’d like to improve in the next batch:

  • Make more decisions based on the feedback we got and on the data we collected.
  • Reduce friction in the whole process for both applicants and mentors (e.g., it took us way too long to get back to the applicants with the results).
  • Try to make the program fairer, more diverse, and more inclusive (we have a couple of ideas here).
  • Improve our internal processes, so we can admit more people to the program.

Given the popularity of the program and the positive feedback we got on the screening tasks, we asked some of our most active applicants to help us run monthly challenges, so that others can better prepare for the next round of applications.

Speaking of which, we plan to open applications for the next batch of the program at the end of January 2021, so stay tuned!

And a huge thank you to the whole team working on making this program better:

  • Maggie Li
  • Lana Bozanic
  • Dario Rosa
  • Tom Lubowe

And:

  • Rafał Ociepa for editing
  • Ethan Hansen for helping with the plots :)
  • Pavan Jayasinha for reviewing the draft of this article

Have a nice day!

Michał