Quantum computing’s reproducibility crisis: Majorana fermions (nature.com)
136 points by pseudolus on April 12, 2021 | 53 comments


Brief summary:

Majorana fermions are particles that can potentially be created under stringent laboratory conditions. Whether this has been achieved in practice is unclear: many scientific papers purport to show evidence of Majorana particles, but there are alternative explanations that fit the observed data just as well. New research claiming Majorana production is published frequently, but most of it doesn't even acknowledge the potential problems or alternative explanations. These sloppy practices cast doubt over the whole field, despite the large impact Majorana particles could have on quantum computing applications.

We need:

* More stringent data reporting: raw data, full data (not only the small subset supporting the hypothesis)

* More critical evaluation of other explanations for the observed data

* Transparent publication processes, so that a paper rejected by one journal on scientific grounds cannot appear in another journal unchanged


There was also the problem that the experimental results are complex to analyze and beyond any one expert. The results need a team to analyze them, and probably a team to properly review the papers.

One thing I'd have HN consider though is that the peer review process was never intended as a sufficiently strong filter to ensure that bad science was never published in the first place.

The point is to get it such that it isn't wasting everyone's time to read it and try to figure out how to attack it and rebut it. The whole broader community of science is supposed to participate in the scientific process, which is what seems to be happening here.

Journals and the peer review process also shouldn't be the sole gatekeeper of the truth either, they do make mistakes in the other direction, rejecting papers incorrectly:

https://en.wikipedia.org/wiki/Fermi%27s_interaction


> One thing I'd have HN consider though is that the peer review process was never intended as a sufficiently strong filter to ensure that bad science was never published in the first place.

In addition, in most fields of science, refereeing is also not meant to guarantee correctness -- even good research can turn out to be wrong, and conversely some mistakes are very instructive. I think it's generally more accurate to view published journal articles as part of an on-going conversation, rather than as a lasting record of scientific truth [0]. This is not to say that scientists should not do the best they can to ascertain correctness, nor that they should not look for alternate explanations. But one should look at published work as what it is -- the best one can conclude after X years of work (whatever X is).

[0] Unfortunately, it's often hard for outsiders to jump into these conversations, in part because journal papers are almost invariably aimed at others who already know the context. But that's a discussion for another time.


>peer review process was never intended as a sufficiently strong filter to ensure that bad science was never published in the first place.

Peer review is basically an artifact of, and a business process solely for, commercial science publishing, i.e. it is commercial product quality control. Peer review pretty much killed scientific debate, as it amplifies the dogma and makes questioning it nearly impossible. Even during the Dark Ages people challenged dogma more deeply and freely (even though doing so carried the risk of being burnt at the stake) than scientists risk doing today. Back then there were public debates; today we have anonymous peer review instead - how is that for progress...

>The point is to get it such that it isn't wasting everyone's time to read it

this is what you have your students for (depending on the complexity of the work - seniors and/or PhD students). Like puppies, they need something to work their teeth on :) Finding flaws/errors/etc. is good training, and it makes the students real, though minor, participants and partners in the actual current science. Being such lowly grunts, they get to focus on finding real issues/errors, fact checking, etc. instead of higher-level opining.


> Transparent publication processes, so that a paper rejected by one journal on scientific grounds cannot appear in another journal unchanged

This is great when reviewers are reviewing properly. But when you run into reviewers that literally don't read some parts of the paper and then object to things already addressed there, it starts backfiring. I don't know how to address this, but I'm thinking maybe making reviewer comments public without necessarily requiring a change to publish elsewhere would tackle both issues? It would seem to encourage both high quality reviews and the addressing of those reviews.


I agree with the sentiment, but I think "branding" a paper that was rejected in any way whatsoever would make things worse on the whole. The ability to try again seems to be an important part of the (admittedly imperfect) peer review process. (Relevant caricature: http://matt.might.net/articles/peer-fortress)


Interesting, yeah, I don't know how to solve it. It's a tough problem. (And that link is hilarious and too accurate, thanks.)


Maybe we need to start putting brown M&Ms into research papers /s

https://www.insider.com/van-halen-brown-m-ms-contract-2016-9


I haven't gotten any M&Ms in a while, but I feel like long ago there were two shades of brown and more recently only one.

Also, whatever happened to blonde Oreos with chocolate filling? I know they existed once, but every time I'm at the grocery store I check and they're not one of the dozens of flavors.


I THINK those were labeled Uh-Oh! Oreos. I'm almost 99% positive they're still in stores regularly. I live in the northeast US.


Thanks, this is truly helpful


Yeah I would pay money to have one of these at the top of every article I read.


I'd love to see the back and forth between the reviewers and the authors, there's tons of information and nuance there that goes unpublished.


Damnit, after the mistake of calling the study of computation 'computer science', I had thought we'd avoided the issue with 'quantum computing'. But no, even this term seems to get muddied by things that aren't computation.

There is no reproducibility crisis in quantum computing, there is in experimental quantum physics with quantum *computer* applications.

To give a classical analogy, you could claim there would be a crisis in computer science because electrical engineers struggle with making a specific kind of transistor.


The problem with your analogy is that we already have functioning classical hardware, while quantum computers are nowhere near being practical.


I think OP is being needlessly pedantic, but in fairness you can do all sorts of things without quantum hardware: designing quantum algorithms (Shor did not have a quantum computer), quantum complexity theory, etc.
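To make the point concrete: small quantum circuits can be simulated exactly on classical machines, which is how much algorithm and complexity work proceeds without any quantum hardware. A minimal sketch in plain Python (the function names are illustrative, not any particular library's API):

```python
import math

# Toy single-qubit statevector simulator: a state is [amp0, amp1],
# the amplitudes of |0> and |1>. Enough to explore small circuits
# entirely on classical hardware.

def hadamard(state):
    # Apply the Hadamard gate H to a single-qubit state [a, b].
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def measure_probs(state):
    # Born rule: outcome probabilities are squared amplitude magnitudes.
    return [abs(amp) ** 2 for amp in state]

# H|0> gives an equal superposition: both outcomes have probability ~0.5
print(measure_probs(hadamard([1.0, 0.0])))
```

Full statevector simulation scales exponentially in the number of qubits, which is exactly why these classical sketches stop working at scale - and why the hardware matters eventually.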


Yes, you can also practice swimming without water.


"[Computer science] is not really about computers -- and it's not about computers in the same sense that physics is not really about particle accelerators, and biology is not about microscopes and Petri dishes...and geometry isn't really about using surveying instruments. Now the reason that we think computer science is about computers is pretty much the same reason that the Egyptians thought geometry was about surveying instruments: when some field is just getting started and you don't really understand it very well, it's very easy to confuse the essence of what you're doing with the tools that you use." -- Hal Abelson (1986)


While this quote was good in its time, I don't think it has aged well. It is 35 years later, and I do not think the essence it refers to has appeared.


When was the last interesting computer science paper you read where the computer code or the instantiation of it onto a physical machine was the interesting part of the paper?


Nah. If one follows the comparison with physics: the subject is what is taught to undergrads (they do not get taught how to build particle accelerators), i.e. the essence of physics. So if I apply the test of what professors believe is the essence of the field, you get something like: http://catalog.mit.edu/degree-charts/computer-science-engine... And the computer science part is:

Computation Structures 12

6.006 Introduction to Algorithms 12

6.009 Fundamentals of Programming 12

6.031 Elements of Software Construction 15

6.033 Computer Systems Engineering (CI-M) 12

6.034 Artificial Intelligence 12 or 6.036 Introduction to Machine Learning

6.045[J] Computability and Complexity Theory 12 or 6.046[J] Design and Analysis of Algorithms

I would say all but (maybe) the last two are about computers (and maybe the first one is in between). I just picked MIT as a random example, to illustrate my general impression.


Why do you think the essence of a subject is what is taught in undergrad (doubly so in CS, where the emphasis is employability in industry, not research in computer science)?

Is the essence of math calculus? That gets shoved down undergrads' throats. It is even an important subfield of mathematics. But it's hardly the essence.


What school teaches mostly calculus to math undergrads? I mean, let's do the same comparison with math, again MIT: https://math.mit.edu/academics/undergrad/major/course18/pure...

Required Subjects

18.03 or 18.032 (formerly 18.034) (Differential Equations) [sufficiently advanced students may substitute 18.152 or 18.303]

18.100 (Real Analysis)

18.701 (Algebra I)

18.702 (Algebra II)

18.901 (Introduction to Topology)

One of the following three Subjects

18.101 (Analysis and Manifolds)

18.102 (Introduction to Functional Analysis)

18.103 (Fourier Analysis — Theory and Applications)

Personally I think this gives a good impression of math. Of course, in research you cannot redo what is already known, but I would say that if any of the subjects above were not already fully developed, they would be researched today. So the undergrad curriculum basically covers an essential part of what we know (have researched); I do not think people would say of any of it, "come on, nobody would research this today" (except, of course, because the research has already been done).


Developing complexity theory of quantum algorithms without a quantum computer is no more outrageous than developing complexity theory of Turing machines with oracles. The former might at least be built one day. And yet, people produce interesting research on both.

Turns out water is overrated (as far as complexity theory goes).


I don't think that's an entirely fair comparison. You can design algorithms and prove things about them without hardware, and the algorithms and their properties will be valid in and of themselves. You don't get to "swim", but the algorithms will be there. One could argue that "computer science" is the study of the logical processes involved in computation.


Computer Science and Computer Engineering are two different things. Is there any reason that quantum computing wouldn't be defined to include both the physical and theoretical computing?


A bit disheartening that there is a reproducibility crisis not only in psychology (and maybe social sciences in general?), but also physics...

On the other hand, after the breakthroughs in physics in the first half (third?) of the 20th century and the stagnation in the latter half of that century, it seems to me (as a layperson) that the number of anomalies in physics seems to be increasing, so that maybe we'll transition from a period of "normal" science to a scientific revolution again soon (in Kuhnian terms). Exciting!


> A bit disheartening that there is a reproducibility crisis not only in psychology (and maybe social sciences in general?), but also physics...

It seems to be across science in general, though it's better or worse in different communities. ML is definitely struggling with this as a field.


You get what you incentivize. We incentivize quantity of papers and gaming the citation system, and that's going to drive down quality.


We incentivize anything that gets grants approved. It's $$$ all the way.


The problem is not reproducibility, but rather omission of data and details without which it is not apparent that alternative explanations are possible.


AFAIK the crisis in psychology applies generally, but this is just QC? Are there other physics-based reproducibility issues?


One big one comes to mind... nobody else has an LHC. It's not hard to imagine a systematic issue in an experiment producing a wrong "5 sigma" result that isn't caught until the world has an equivalent or higher-energy beam to play with.

This particular issue seems to be of a similar nature. You've got a research group that made and tested a device, and nobody else has duplicated that (not sure - but if there are patents covering their fab, there could be problems for anybody seeking to reproduce the result).


For very big and expensive projects that have an infrastructure part and a science part, like the LHC’s tunnel vs detectors, we try to avoid this by having multiple, separately-designed detectors at different intersection points. Hence the CMS and ATLAS detectors. Even though they share a beam, the rest of their systematics should be independent. They even unblinded their Higgs results together, to ensure neither used the other’s result as prior knowledge.


I don't know about the LHC, but at HERA they had two independent groups running similar experiments on different sides of the ring without talking to each other. This was exactly to address the problem you mentioned.


It's the same at the LHC: there are two experiments that run independently, ATLAS and CMS, for exactly this reason.

The Higgs was discovered by both independently, with higher than 5 sigma each. The combined sigma of the two experiments was 7-ish, if my memory serves correctly.

EDIT: My memory serves correct because sqrt(5^2 + 5^2) ~ 7
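The quadrature rule the comment relies on can be sketched in a few lines. This assumes independent Gaussian measurements of the same effect, for which significances roughly add in quadrature (a back-of-envelope rule; real combinations like ATLAS+CMS use the full likelihoods):

```python
import math

def combined_sigma(sigmas):
    # Rough rule of thumb: for independent, Gaussian-distributed
    # measurements of the same effect, significances add in quadrature.
    # Real experimental combinations are done with full likelihood fits.
    return math.sqrt(sum(s * s for s in sigmas))

# Two independent 5-sigma results combine to roughly 7 sigma
print(combined_sigma([5.0, 5.0]))  # ~7.07
```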


There are multiple groups at different universities involved in analyzing the LHC results, though. Yes, there is only one machine performing the experiments, but the data-processing effort is truly global: https://wlcg-public.web.cern.ch/tier-centres

I haven't been able to find a simple list of all the universities and research groups working on the data, but the tools to analyze it are an open effort too: https://root.cern/about/license/


Very good article. Clear and concise. It pertains not just to this field of study, but to most scientific research. We have seen many of these issues discussed (scientific reporting, publishing) on HN, but this writer summarises a few very good solutions, towards the end of the article. How to achieve reproducibility, establishing shared experimental techniques, editorial ownership, are examples.


When I was in physics, I always read "a typical example of the data from an experimental run is shown in figure 1" as "we have carefully selected the best data, shown in figure 1." It sounds like that is the case here.


Perhaps worryingly the same is true in CS papers also.

I genuinely think there should be a "No code? No Data? No Paper" rule instated from the top.


Ha! Perhaps you have not published a theory paper. I'm on the systems side but theory is real too, even if there is no perf test. Systems papers can benefit from code and data though ;-)


Put it this way: I have read a lot of papers that are basically pure theory (~5-10 pages of mathematics) but with some graphs at the end showing how effective the theory is. Again, no code, no error bars, no data, etc.


Surprised I haven't seen any comments about Microsoft. This was the qubit type that Microsoft was betting on; I think the lead researcher behind the reproducibility issue worked for them too. Very curious whether they are going to pivot to something else - they have also invested heavily in QC startups.


Off-topic: I submitted the exact same article only a few hours before https://news.ycombinator.com/item?id=26777030 ; I thought someone else submitting the same thing would end up being a simple upvote on my submission instead of a separate submission?

I don't care about the points or credit, glad it's on front page. But apparently there's something I don't know about the workings of HN? Thankful for someone clearing that one up and sorry for taking up the space.


That's true for 8 hours, but this one was submitted 9 hours after yours.


Thank you for answering ; good to know.


Perhaps they submitted it before you and then changed the submission (e.g. title)? Just a guess.


I submitted the article in question and didn't make any alterations to either the title or the link. It's possible (just a guess) that the link I submitted was slightly different from the link that the OP submitted as I tend to strip out extraneous parts of the URL. After that the submission was likely then just fortuitously upvoted.


I had the same idea, but I do that as well, and the link is exactly the same.

edit: it doesn't really matter, I guess. HN has some mechanisms that aren't public AFAIK: how downvoting works exactly, how flagging works, who can do it, what the algorithms are for sorting. This account hadn't had a successful submission before, but it's not that I spam submissions either - four submissions in total, and the account is 8 months old. But I guess it's just some automatic behavior of the site, well... shrug.


Related: Is Quantum Computing bullshit? https://www.wired.com/story/revolt-scientists-say-theyre-sic...

I thought this was pretty hilarious.


"Sick of the hype" is not equivalent to "it is bullshit".


I didn't make it clear: obviously the entire field isn't bullshit. The Wired article is talking about a Twitter account that judges whether some QC news or paper is bullshit or not.


Cutting edge does often cut.



