Dear Reader,

To say that these days we are flooded with information is an understatement. Obviously, this also applies to the domain of quantum computing (QC) — there are a lot of articles out there, and many (but definitely not all!) of them are either low-quality or insignificant. I would like to present a few methods for selecting valuable articles to read. Even though my examples come from the world of QC, many of the methods presented here are universal.

This post is not about reading scientific papers — even though many of the methods described here also apply, the art of reading papers is much more subtle. That’s what I mean by “popular” in the title.

And one more thing — at the bottom there is a list of all the links to external resources I used throughout the text.

Motivation

I will start with some thoughts on the nature of information and human psychology. This will give you some rationale behind the specific tips presented in the second part of this article. The descriptions in this section are by no means exhaustive; if you’re curious, please follow some of the links/references provided.

Lindy effect

The Lindy effect is a theory which says that a good way to estimate how much longer a non-perishable thing (like a technology or an idea) will be in use is to look at how long it has already existed.

A couple of examples:

  • We have been reading Plato’s dialogues for 2,300 years, so it’s quite possible that in the year 4300 we will still be reading them. But “The Da Vinci Code” has been read for the last 16 years, so people may still be reading it 16 years from now, but it is doubtful whether it will be read in 2100, not to mention 4300.
  • We have used technologies such as “printed books” for 500 years or so, and “e-ink” for about 20 years (the first patent was filed in 1996). So if we want to make predictions about the year 2100, we can say with a high degree of certainty that we will still be reading printed books then. But will we still be reading books using e-ink? Probably not — some other technology might replace it. Keep in mind, however, that with each passing year in which e-ink does not become obsolete, we gain more and more certainty that this technology will indeed prevail.

Here I have only scratched the surface — Nassim Nicholas Taleb describes this topic in detail in his books (as far as I can recall, mostly in Antifragile).
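
If you like to think in code, the naive Lindy estimate is a one-liner. This is my own formalization of the rule of thumb above, not a formula from Taleb:

```python
def lindy_estimate(years_in_use: float) -> float:
    """Naive Lindy estimate: expected remaining lifetime ~ current age."""
    return years_in_use

print(lindy_estimate(2_300))  # Plato's dialogues: expect ~2,300 more years
print(lindy_estimate(20))     # e-ink: expect ~20 more years
```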

The half-life of knowledge

We know that radioactive isotopes have half-lives: suppose we have 1 kg of Uranium-232. Then after roughly 69 years we will have only 0.5 kg of Uranium-232 left (and about 0.5 kg of its decay products).

As it turns out, we can use the law of exponential decay to describe other substances, such as beer froth (this discovery won the Ig Nobel Prize in Physics in 2002). The same applies to knowledge — a lot of what we know becomes obsolete really fast (Michael Simmons explains this concept in one of his articles). The rate at which it happens depends on the domain — knowledge in sociology ages much faster than knowledge in physics. But neither is immune to this effect.
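
In formula form, the decay law is N(t) = N₀ · (1/2)^(t/T), where T is the half-life. A minimal Python sketch, using the ~69-year half-life of Uranium-232 mentioned above:

```python
def remaining(initial: float, half_life: float, t: float) -> float:
    """Amount left after time t, given a half-life in the same time units."""
    return initial * 0.5 ** (t / half_life)

print(remaining(1.0, 69, 69))   # ~0.5 kg left after one half-life
print(remaining(1.0, 69, 138))  # ~0.25 kg left after two half-lives
```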

Lifecycle of technology

Back in 2014, I had an opportunity to attend a lecture by Raymond Laflamme (one of the pioneers of QC). He presented a simple model of the technology lifecycle. I don’t remember it perfectly, but it was something along these lines:

Observation

The first stage is observation — we observe some phenomenon and try to understand and describe it.

Control

Once we understand the phenomenon better, we can strive to control it in a laboratory, i.e. in a controlled environment, on a small scale.

Prototype

Once we understand something and can control it, we can think about how to use it for something useful, how to productize it; this leads to the creation of the first prototypes.

Technology

Once we can prove that our prototypes function as intended, we can think about how to scale them — mass-produce them, let people and businesses use them, make them actually useful.

Impact

Once the technology is used by many people, it starts having an impact. It might even lead to some new interesting observations, and the cycle repeats itself.

It’s important to remember that usually each of these stages takes years, if not decades.

I won’t give you concrete examples or tell you where we are with QC right now, but I recommend comparing this model with the development of quantum computing on Wikipedia or with the Hype Cycle.

Cost of reading

In my opinion, there are three main types of cost of reading articles:

  • Time
  • Emotional
  • Cognitive

Time — you need to spend some time to read a given article; that’s simple.

Emotional — if you read some bad news, you feel bad. And our brains work in such a way that if we have two events of the same “objective weight”, the negative one hurts more than the positive one brings joy (more on that in Thinking, Fast and Slow by Daniel Kahneman and Fooled by Randomness by N. N. Taleb).

Cognitive — everything you read leaves some trail in your head. If you read a lot of worthless content, you will simply clutter your head.

Tips & tricks

Now that we have all that in mind, it will be easier to understand some strategies which help to select what to read. Keep in mind that I don’t apply the methods below all the time, because:

  • I’m only human
  • It would be too narrow — I leave myself some space for exploration and I’m aware that some of the rules I follow might be wrong. Breaking them from time to time allows me to re-evaluate if I was right in the beginning.

And the methods are:

  • Don’t read news
  • Use the interest criterion
  • Use the utility criterion
  • Follow the right people/sources
  • Wait
  • Be vigilant

Don’t read news

Most “news-type” articles are worthless (and I won’t restrict myself to the QC realm here). If you think about the Lindy effect and half-lives of knowledge, it’s easier to understand why. Also, usually just reading the headline gives you the most important part of what the article is about.

I especially dislike news like: “New method of XXX created by scientists from YYY will lead to a breakthrough in quantum computing!”

I don’t doubt the research is interesting and that the scientists did a great job. But what impact does it have? If you look at the technology lifecycle I described earlier, we are probably several years away from actually making any use of it. So what’s the point of bearing the cost of reading this article?

Another thing worth mentioning when it comes to news in QC is that the press often covers “new discoveries” many months after they were originally reported (e.g. on arXiv). So if you are not sure whether a given “breakthrough” is really something of importance, you can search for the scientific discourse that happened before the official press release.

Use the interest criterion

Quantum computing has many subdomains and it’s hard to be an expert in all of them. So be aware of what’s interesting for you and focus on that. For example, I’m interested mostly in quantum algorithms and near-term applications of QC. So if I see an article about quantum error correction (QEC) or some mathematical proof that QC will or won’t be able to do this or that, well… It won’t be a top priority for me, since it’s not in my area of interest or expertise.

Honestly, in most cases I wouldn’t even be able to say whether a new way of doing QEC is better than the previous one — I’d have to trust the journalist who wrote the article that it’s groundbreaking. I think this criterion also means sticking to content where I can recognize whether the information is inaccurate or distorted in some way.

But if there is something that sounds super interesting, and is outside of my usual interests — yeah, I will read that — I’m susceptible to clickbait too ;)

Use the utility criterion

Apart from reading what’s interesting to me, I also look at what might be useful to me. So I skip a lot of introductory or general-audience articles, since they often repeat the same stuff, and read only the ones that might be valuable to me. If you’re new to the field, then introductory articles probably have higher utility for you than articles about scientific breakthroughs. Just be clear about your priorities :)

Keep in mind that there is some balance between the utility and interest criteria — there might be some topics that don’t really interest me, but they hold a lot of value. As I mentioned — I’m not 100% strict in following the rules.

Follow the right people/sources

My main sources of news are:

  • QC report — this is a no-brainer. High-quality, concise content, and I get a newsletter every Sunday.
  • My workplace Slack — content shared in the workplace is meant to be relevant to the company’s mission and to all the employees, which means it’s usually interesting to me.
  • Fact Based Insight — I really like this website. It has a good balance between business and technical information, covers a wide array of quantum technologies (not only computing) and the materials are of very high quality.
  • LinkedIn
  • Twitter
  • Blogs — honestly, I don’t follow any particular blogs about QC. But I probably should find some high-quality ones and start ;)

When it comes to social media, there are a couple of rules of thumb I follow:

  • I use Facebook mostly for private, not professional, discourse; that’s why it’s not included above.
  • On LinkedIn I simply don’t read much of the newsfeed. Sometimes I will spot something interesting at the top, and I scroll it from time to time. I try to read posts by people who meet the same criteria I list for Twitter below.
  • I don’t scroll the feed too much — the most valuable articles usually get to the top anyway.

I’m quite picky when it comes to whom I follow on Twitter. The usual criteria:

  • people I know and respect
  • people with interests similar to mine
  • consistently posting high-quality content / comments
  • consistently not posting trash

Wait

Some people might say: “But I want to be up-to-date! I fear I will miss important news!” Well, the chances of you missing something big are pretty low, since if something significant happens, people will probably share it and it will eventually get on your radar anyway.

Time will tell whether something is indeed of high quality and importance; if it is, you will probably spot it repeatedly, because many people will be sharing it.

And even if you miss it, well… what’s the actual cost? How much will you suffer from not knowing that some company has just run quantum key distribution tests that worked 10 times better than the previous ones?

Be vigilant

In QC articles, there are often a lot of simplifications, misinformation and hype. I’m also not without blame — it’s really difficult to be exact, approachable and concise at the same time (see this interview with Scott Aaronson).

There are a couple of ways to decide whether a given article is high or low quality:

  • Look at the discussions below a given article. A good recent example comes from Twitter: read this article, this response to it, and this incredible thread.
  • Take everything in the non-specialist articles with a pinch of salt.
  • See if it mentions any of the common themes I list in the next section. If so — stay alert!

Common pitfalls and misconceptions

There are a couple of themes that persist through many, many materials I’ve seen:

  • Quantum parallelism
  • Overly pessimistic/optimistic statements
  • Big data
  • Breaking cryptography
  • Optimization problems

Quantum parallelism

People often claim that quantum computers are faster than classical computers because they can exploit superposition to calculate things in parallel.

It’s not hard to see where this is coming from — on the surface it makes sense; I used to repeat it too. Why it’s wrong has been explained by Scott Aaronson in the interview I’ve already mentioned and in this comic, so I won’t attempt to explain it again here. Consider the comic a must-read ;)
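
Still, to make the point concrete, here is a toy sketch in plain numpy (my own illustration, not any framework’s API). A uniform superposition over n qubits holds 2^n amplitudes, but measuring it returns a single random bit string — you don’t get to read out all the branches at once:

```python
import numpy as np

n = 3
amplitudes = np.full(2**n, 1 / np.sqrt(2**n))  # equal superposition
probabilities = np.abs(amplitudes) ** 2        # Born rule

# A measurement collapses the state to ONE basis state, sampled at random.
outcome = int(np.random.choice(2**n, p=probabilities))
print(f"Measured: {outcome:0{n}b}")  # a single 3-bit string, not all 8
```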

Overly pessimistic/optimistic statements

There are so many headlines and claims like: “QC will change the world sooner than you think” or “QC will not have any commercial use in the next 3 decades”.

When it comes to overly optimistic claims — think about the technology lifecycle; it all takes time. Let’s say we have a prototype of an algorithm running on a real QPU which solves some problem faster/better than a classical one. That would be an incredible feat in itself, but there has probably been a lot of manual work behind the scenes, and it will take a lot of effort to make use of it in a reliable and reproducible way. It will probably require a lot of testing, and it will be at least a year before it’s used in practice. And since we don’t have anything like this today, I doubt we’ll see any significant impact on the world for at least 3 years. I hope I’m wrong :)

When it comes to overly pessimistic claims — I agree, the technology faces daunting challenges at every layer of the stack. However, a couple of things make me hopeful:

  • The number of people trying to solve these problems is still low, which means we have been exploring the space of possible solutions rather slowly; there is likely still a lot of unexplored ground.
  • There are many algorithms which work pretty well even though it’s really hard to explain exactly why (so-called heuristic algorithms), e.g. neural networks. So as the technology matures, we might find some novel ways of exploiting it, without even understanding them at first.
  • Even if some of the pessimistic claims are true (e.g. about how difficult it is to scale), harnessing just a fraction of the computational power QC offers in theory might still bring tremendous benefits.

Big data

“Quantum computers will revolutionize big data!” Well… it’s not that simple. When we talk about big data, we usually mean at least GBs of data, often more. In the foreseeable future we will have imperfect quantum computers with 1,000 qubits at most. It doesn’t look like such machines can process GBs of data.

There might exist some efficient ways of encoding the data, or some clever algorithms which would allow us to process that much data. But I’m not aware of any reasonable proposal for how to do it, and I don’t think we are at that stage yet. So when I see someone mention “QC” and “big data” in one sentence, I immediately get suspicious about whether the author really knows what they’re writing about.
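
To get a feel for the scale of the mismatch, here is a back-of-the-envelope sketch (my own illustration, assuming the two textbook encodings: basis encoding stores one classical bit per qubit, while amplitude encoding packs 2^n values into n qubits but is generally believed to require on the order of 2^n operations to load arbitrary data):

```python
import math

data_bits = 8 * 2**30  # 1 GB of classical data, in bits

# Basis encoding: one classical bit per qubit.
print(f"Basis encoding: {data_bits:,} qubits")  # ~8.6 billion qubits

# Amplitude encoding: n qubits hold 2**n amplitudes...
n = math.ceil(math.log2(data_bits))
print(f"Amplitude encoding: {n} qubits")        # 33 qubits
# ...but preparing an arbitrary state generally costs on the order of
# 2**n operations, i.e. comparable to just reading the data classically.
```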

Breaking cryptography

I’m not an expert when it comes to quantum cryptography, and there is some controversy around claims that “QC will break cryptography”. In most of the popular articles which mention this, it seems the authors have not done their homework and are not up to date with many of the nuances. I think this article about the ethics of quantum computing covers this topic (and a couple of others) pretty well, so I won’t repeat the arguments stated there.

Optimization problems

I’ve spent some time working on using QC to solve optimization problems, mostly TSP-like ones (Traveling Salesman Problem). And I went from being enthusiastic about it to appreciating how hard these problems are and how good and reliable our current methods of optimization already are.

I don’t think combinatorial optimization will be one of the first areas where we will see quantum advantage. There are theoretical results suggesting that we can get a quadratic speedup at best, not an exponential one (for more details, see the references section at the bottom). Also, in many areas of optimization, getting a solution which is slightly better than the previous one doesn’t provide that much business value; at the same time, it’s true that it sometimes might yield huge benefits.
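
As a back-of-the-envelope illustration of what “quadratic at best” means in practice (my own numbers, based on the Grover argument in the references section):

```python
import math

n_cities = 20
n_tours = math.factorial(n_cities - 1) // 2   # distinct tours for 20 cities

classical_checks = n_tours                    # brute force inspects every tour
grover_queries = math.isqrt(n_tours)          # Grover: ~sqrt(N) oracle queries

print(f"Brute force:   ~{classical_checks:.2e} tour evaluations")
print(f"Grover search: ~{grover_queries:.2e} oracle queries")
# Still ~2.5e8 queries -- a huge improvement, but far from making TSP easy.
```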

That being said, I don’t want to rule out the possibility of some ingenious hybrid approaches that might help solve some problems faster, and sooner than we suspect. Or that there are optimization problems which would be an extremely good fit for QC.

I have just presented some seemingly contradictory statements, so to sum up:

  • The usefulness of QC for optimization, at the scale it’s usually presented, is not as clear-cut as it might seem.
  • It’s much clearer that quantum chemistry/simulation will be able to benefit from QC in the short/medium term; it’s not so obvious for optimization problems.

Closing notes

There are many high-quality articles about QC. Entry-level, business-oriented, more technical — articles of all kinds. I hope this post will help you select relevant articles, avoid nonsensical ones, and filter noise from information much better.

If you have some other methods for filtering signal from noise, please share them with me!

I would like to thank the following people for reading the draft and making this article much better than what I would be able to write only by myself: Amara Katabarwa, Ntwali Bashige, Alejandro Perdomo-Ortiz, Peter Johnson, Yogesh Riyat and (as always and last but not least) Rafał Ociepa.

And if you like this article, consider signing up for the newsletter at the bottom of the page :)

Have a nice day!

Michał

References

I wasn’t sure where the “quadratic speedup” argument in the “Optimization problems” section comes from, but fortunately Tom Wong helped me out :)

For traveling salesman, it’s NP-complete, so one can efficiently check a solution. Using this as an oracle, one can Grover search for a square root speedup over brute force. There’s no proof that an exponential speedup is impossible, but it’s regarded as highly unlikely.

And here is a list of links I included in this article:

Disclaimer: The opinions expressed in this article are my own and do not reflect the views of my employer.