Probably the highest point in my physics "career" was when my advisor recommended Quantum Field Theory for the Gifted Amateur [0] to me. It was after I had basically decided to abandon hope of academia and become an engineer, but I was just barely past the threshold of being able to understand the contents. I worked through the book solo and greatly enjoyed it; highly visual and well paced. I'd recommend it to any undergrad+ who has made it past bra-kets and wants to see how far the rabbit hole goes.
Another great book at that level is Student friendly Quantum Field Theory by Klauber. Especially if you struggle with the incomplete math treatments in standard books like Peskin & Schroeder, where the first chapter basically assumes you already know all the weird complex contour integrals that you usually only encounter in QFT. If Klauber says "from this easily follows ..." then you can expect to understand it even if you're not yet an expert. Ofc that comes with less depth in total, but there's no point in talking about renormalization if you haven't understood field quantization.
My undergrad is in Computer Science and Engineering, but I would love to read and understand this book. Any recommendations for what to know beforehand? Some prerequisite books or subjects, maybe?
It's been a while now since I've picked it up, but I think the main content it assumes you're comfortable with is on the order of a semester of Quantum Mechanics. Otherwise, it definitely uses quite a few tricks in calculus (e.g. integrating probability amplitudes, variational calculus, and likely some higher-dimensional stuff). The later chapters probably get even more exotic, but the book prepares the reader pretty well, I think.
For book recommendations, the ones that come to the top of my mind are:
- Griffiths' Quantum Mechanics [0]. It's become a pretty standard undergrad QM text, and in my experience was very approachable.
- Div, Grad, Curl, and All That by Schey [1]. I don't remember how much into vector calculus the QFT book got, but this one turned the tide of my undergrad personally. In ~120 pages it gave me a better intuition for 3D calculus than any other resource.
- Something that covers calculus of variations, the Euler-Lagrange equation, etc. I first covered this in Classical Mechanics but don't remember the textbook. The Feynman Lectures on Physics [2] probably covers it, but I don't know for certain. Incidentally, Feynman is all over QFT, so his undergrad materials are probably excellent prep.
I don't remember whether the book introduces bra-kets (Dirac notation) or assumes them, and I don't remember if Griffiths uses the notation at all. I first saw it in General Relativity, with Spacetime and Geometry [3], but there are definitely materials that explain the notation better.
I'm pretty sure all of these books, and plenty more on these topics, should be widely (or freely) available. Good luck :)
> Something that covers calculus of variations, the Euler-Lagrange equation, etc. I first covered this in Classical Mechanics but don't remember the textbook. The Feynman Lectures on Physics [2] probably covers it, but I don't know for certain.
Caltech is fantastic. They even have the old-form domain name there in your links, similar to Stanford CS department which is also exceptional in the nomenclature, but grandfathered in.
The Feynman lectures have a reputation for being very hit and miss. People who "get it" will find them really interesting and useful. But if you don't, then it might just confuse you.
I know a condensed matter postdoc who told me he felt ready to tackle the Feynman lectures only after he had completed his phd....
They're a great companion book, but you really need a book that guides you through derivations and computations. Some techniques are non-obvious like choosing coordinate systems to make integrals easier, clever contours when applying residue theorem, change of variables using orthogonal matrices to diagonalize symmetric matrices, etc.
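To make the last of those concrete, here's a minimal sketch (my own illustration, not from any particular textbook) of how diagonalising a symmetric matrix with an orthogonal change of variables factorises a Gaussian integral; the matrix A is an arbitrary example:

    import numpy as np

    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])                  # symmetric, positive definite example
    eigvals, Q = np.linalg.eigh(A)              # A = Q diag(eigvals) Q^T, with Q orthogonal

    # After the change of variables y = Q^T x (unit Jacobian), x^T A x = sum_i eigvals[i] * y_i^2,
    # so the 2D Gaussian integral factorises into 1D pieces:
    #   integral exp(-x^T A x) d^2x = prod_i sqrt(pi / eigvals[i]) = pi / sqrt(det A)
    factorised = np.prod(np.sqrt(np.pi / eigvals))
    print(factorised, np.pi / np.sqrt(np.linalg.det(A)))   # the two expressions agree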
I second the sentiment, that working through a book is the absolute best education. I worked through many.
I copied down all the code in ANSI Common Lisp and answered all the questions in the book explicitly, wrote down my answers and everything. Except for the object-oriented part which the author says not to read. Says it was a requirement for the book in the 90s, he didn't want to include it. I didn't read it on his advice. That book also includes cryptic objections to political correctness, in a simulation you're supposed to run about a contest in an alien race that always ends in a tie or near-tie. The simulations the book proposes reveal equality of outcome in these contests is due to manipulation. You don't get even representation from fair contests. Whereas university admissions in general do end up with a lot of "underrepresented minorities"--and this Daniel Cussen saying that, my torturer said that I'm a "minority of one"--but the ties and near-ties are because of fixing. The game is rigged.
But a book! That is an education! Then I attended Stanford and it was like, what is this shit? I gotta cheat to pass, the fuck? If you say I can't cheat and then you say I can cheat in a specific manner, eg asking a TA for all the answers, what do I do? My Chilean logic says, if they say you can't and say you can, you can't. Same rules as Magic: the Gathering, negative prevails over positive--and it's a matter of integrity, there's reasoning for this, both in Chile and in Magic, it makes perfect sense to me. Basically it's conservative. Not "negative" in the pejorative sense.
Hell, weight loss is negative, is it bad to lose fat? Fatness is positive, bigger number on the scale. Why is negativity worse? Further I'm in the Southern Hemisphere, the negative hemisphere, there's a physical definition for its negativity. Is this wrong? Does negative magnetism translate to negative morality, which I truly do recognize as wrong? Manicheanism, yes, right is positive wrong is negative. I basically buy that. But with all physics? And of course technically positive current is incorrectly defined, it should be the other way around, electrons are what moves so they should be positive.
Stanford I guess thinks can prevails over can't, should go over that in orientation. There shouldn't be contradictions like this in the first place, though. Admitted students have to be able to pass all courses without any cheating. I guess that's where I fucked up. Getting myself lobotomized, guess I'm no longer Stanford material despite having already been admitted.
No longer have the brains. Literally no longer having the brains. That's literally it.
But Stanford refused to protect me from lobotomy, I kept telling them how much harm I would go through, crying in my lynching meeting on February 6 2009, begging for a trial and always getting denied, going to every office I could, writing emails nobody gave any replies in writing to, telling my whole dorm I was getting lynched (apparently nobody does this). Stadmin only pushed me further into that system, jammed me further into the meat grinder. Meat grinder. Think of it like a blender, what happens to meat that goes in? Does it retain its shape? No it gets diced right?
Just watched gloatingly as I got put through the system. Watched with relish. Young white man getting lynched. That is the consummation of the Civil Rights Movement, young white man getting lynched. It's not as platonic, abstract and noble as it pretends to be. White is negative. Young white man getting lynched? Justice.
But back to my point: the book. It used to be, before universities monopolized anointing the intelligent as such, that it wasn't about going to the right schools. It was about being well-read. Universities didn't exist before roughly AD 1000; the University of Bologna was the first. And even then it started slow.
> There is a mathematical theorem that forbids you from writing down a discrete version of certain quantum field theories.
> (...)
> You know, if you take this theorem at face value, it’s telling us we’re not living in the Matrix. The way you simulate anything on a computer is by first discretizing it and then simulating. And yet there’s a fundamental obstacle seemingly to discretizing the laws of physics as we know it. So we can’t simulate the laws of physics, but it means no one else can either. So if you really buy this theorem, then we’re not living in the Matrix.
I don't buy this reasoning.
I can encode the continuous function f(x) = x^2 on a computer.
Then I can calculate this function for any number up to any digit, if I allocate enough memory.
I don't need to allocate a discrete domain up-front and then stick to it at all times. I can increase and decrease the accuracy as needed.
In a similar fashion I could simulate a continuous universe encoded with continuous operators. I could simulate it on a discrete lattice with certain precision as long as nobody inside the simulation builds equipment that can measure things "in-between" the lattice points. And when somebody does, at that moment, I can simply pause the simulation, calculate the values of my operators using a locally-denser lattice, then unpause. The observer with their equipment wouldn't notice anything because the simulation was paused, they would just get the correct measurement.
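For what it's worth, here's a toy sketch of that "precision on demand" idea with the f(x) = x^2 example; Python's decimal module here is just one arbitrary way to get finite-but-adjustable precision:

    from decimal import Decimal, getcontext

    def f_squared(x: str, digits: int) -> Decimal:
        """Evaluate f(x) = x^2 to `digits` significant digits, chosen at call time."""
        getcontext().prec = digits              # precision is not fixed up-front
        return Decimal(x) * Decimal(x)

    print(f_squared("1.4142135623730950488", 10))   # a coarse answer
    print(f_squared("1.4142135623730950488", 40))   # refined only when someone "measures" more closely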
I don't think continuous and discrete-with-arbitrary-precision are the same thing, maybe even to the degree where we can't use the latter as a universal approximation for the former. I think there are some cases where approximating a continuous function on a lattice produces an error that isn't a function of precision (I think the Weierstrass function is one).
It really comes down not to the question of equivalency, but distinguishability.
If the universe is lazy evaluating things discretely down to scales well below that of observation, there's probably no damned way to tell the difference.
I think it depends on what is doing the observing. If it's humans or conscious beings or something like that, then the admins could be keeping the precision perpetually out of our reach.
If it's atoms or photons doing the observing, though, they're affected by the gravitational pull of every mass in their light cone. So an atom can distinguish between universes in which another atom 1000ly away moved by 0e or 1e (where e is the distance between two lattice points).
But, it wouldn't have to know until 1000 years later. There does seem to be some computation-resource saving mechanic at play, but I don't think it has to do with discretization.
The theorem mentioned is Nielsen-Ninomiya. A good analogy is aliasing in signal processing. It's not that you can't define a theory and compute it. It's that under certain conditions you get more than one electron: extra modes that behave like the electron but are different, and they are an artifact of the discretization.
This is similar to aliasing in signal processing. A high frequency signal behaving like low frequency. And there are solutions to this problem, but it's hard to reason which one is the "right" solution.
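A quick illustration of the aliasing analogy (the frequencies and sample rate here are arbitrary choices, not anything physical): sample a sine above the Nyquist limit and, on the lattice, it is indistinguishable from a low-frequency one.

    import numpy as np

    fs = 10.0                                    # samples per second: the "lattice spacing" in time
    t = np.arange(0, 1, 1 / fs)
    high = np.sin(2 * np.pi * 9.0 * t)           # 9 Hz signal, above the 5 Hz Nyquist limit
    low = np.sin(2 * np.pi * 1.0 * t)            # its 1 Hz alias
    print(np.allclose(high, -low))               # True: on this lattice the two coincide (up to sign)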
The point is that it can’t be simulated on any lattice of any density. It doesn’t matter how fine the lattice is compared to the sensitivity of the measurement. For your case you showed, it would be like if x^2 simply didn’t exist on a lattice, and couldn’t be calculated by a computer no matter how much memory you threw at the problem.
Certain parts of QFT work fine in lattice-based calculations (lattice QCD, for instance, where it's simulated at a certain lattice scale and then extrapolated to the continuum limit), and some seemingly don't work at all.
> The point is that it can’t be simulated on any lattice of any density. It doesn’t matter how fine the lattice is compared to the sensitivity of the measurement.
It feels like you want to do sth like: first discretize (choose a lattice), then simulate (do calculations on this lattice).
I want to do the opposite: first simulate (do calculations on a continuous domain), then discretize (restrict my results to a lattice).
> For your case you showed, it would be like if x^2 simply didn’t exist on a lattice, and couldn’t be calculated by a computer no matter how much memory you threw at the problem.
So the functions we're talking about do exist on continuous domains - but they don't have a corresponding definition on a lattice?
Couldn't we embed a lattice in the continuous domain, then restrict the function along the embedding, thus getting a definition on a lattice?
Unless it's not possible to embed the lattice in a continuous domain - then my reasoning breaks.
(note: I know nothing about physics, I'm a programmer with math education, talk to me like I'm an idiot)
I apologize I am actually running out the door at this point, so I can’t reply in a ton of detail (I’ll try to remember to do so later)
But the point to remember is that these aren’t normal algebraic equations, they’re based on the quantum operators, right? And so we can always do symbolic math on them, but to get numeric results, they have to be instantiated at some point. The operators exist at every point in space, and some of the operators can be approximated on a lattice discretization, but some of them cease to be well defined as soon as there is any distance between the operators (so they require true real numbers, not floating point numbers - ergo, infinite memory).
One point that I think is missing is that there's a bit of a difference between numerical solutions to QFT equations (like the calculation of g-2 referenced in the paper) and lattice calculations, in that in general those numerical calculations are giving averaged quantities. We couldn't, for instance, take that average quantity and use it instead of the dynamically fluctuating quantity in a lattice simulation. We could run a lattice simulation and estimate the value of g-2 from the lattice to see how well our discretization -> continuum extrapolation worked. But we couldn't go backwards from the numerical solution to the lattice, so to speak.
I guess the (crazy, I know) assumption that I made is that I have some analytical, symbolic expression for a function that describes the state of the universe at every point. This "state" describes some fundamental quantity (not necessarily a quantity we have a name for yet).
Then we express the value of any particle field at every point as a (potentially very complex) symbolic expression that only uses the state function from my previous paragraph.
All of these expressions need only finite memory to store. They describe functions with domain R^n to some co-domain of operators or whatever.
Then I can calculate the value of this complex function at any point with any precision I like, with finite memory, although unbounded - I need to allocate more memory when I want more precision.
Point is, I delay the process of "latticization" (calculating the numerical values at each point of a chosen lattice) to the very end - only then I have to choose how fine-grained my lattice is.
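Here's a toy version of that workflow, purely to illustrate "symbolic first, lattice last" (the reply below explains why this doesn't carry over to real QFT); sympy and the made-up "state" are illustrative assumptions:

    import numpy as np
    import sympy as sp

    x, t = sp.symbols("x t", real=True)
    state = sp.exp(-x**2) * sp.cos(t)            # made-up symbolic "state", finite memory to store
    field = sp.diff(state, x, 2) - state         # some derived "field", still exact and symbolic

    evaluate = sp.lambdify((x, t), field, "numpy")
    lattice = np.linspace(-1.0, 1.0, 5)          # the lattice (and its fineness) chosen only at the end
    print(evaluate(lattice, 0.0))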
> I guess the (crazy, I know) assumption that I made is that I have some analytical, symbolic expression for a function that describes the state of the universe at every point.
This is the error. All you have is some partial differential equation. It has no known symbolic solution.
I do lattice QCD. It's not that we have a problem because of float/double inexact and limited arithmetic. It's that we have a finite amount of RAM.
So the thing we'd want is not continuous values but continuous registers. Maybe this is possible with some very clever engineering but I'd wager that your computation will develop other problems, such as thermal noise causing problems (whereas digital computers have error correction).
So, lattice QCD is often mentioned as an example of computationally well-defined quantum field theory with some reliable results. Wikipedia talks about "lattice QCD is a way to solve the theory". What does this really mean? What questions did lattice QCD answer? Is it possible to state and solve initial value problem for, say, proton-proton scattering?
AND! I forgot to mention the LQCD calculations of the muon's anomalous magnetic moment, g-2, which is a precision observable currently being studied by an experiment at Fermilab.
This quantity (g-2) is dominated by electrodynamic effects. The QCD contribution is on the scale of 1e-10 compared to the full result. But it is the leading theoretical uncertainty.
There are two ways to get a prediction for the QCD contribution to this process. One is to try to back it out of a variety of data from unrelated experiments. In other words: try to fit it by requiring consistency. The other is a LQCD calculation.
The experimental data-driven fit to the QCD contribution and the LQCD calculation differ significantly. This difference is the difference between the current experiment claiming a sizeable new physics effect (the data-driven approach) and perfect agreement with the Standard Model (LQCD calculation).
"Solve the theory" is bit of a wish-washy expression. It depends on what you are interested in. What it really means is that it provides an approximation-free prescription for extracting observables from the QCD action.
Given some experimentally-measured observables (the pion mass, the kaon mass, etc. as long as you include 1 observable per parameter in QCD, meaning 1 gauge coupling + 1 per quark flavor you include in your simulation) you can calculate all other quantities---in a world that is pure QCD. We are now getting to an era where we can do lattice QCD+QED, but we cannot do lattice-standard-model because of the issue with chiral gauge theories mentioned in the article. However, since the weak sector is perturbative, we can combine pen-and-paper/Feynman-diagram-style calculations with LQCD calculations to get what we want: first-principles Standard Model predictions.
To discover new physics you need both an experimental signal and a precision prediction from the Standard Model. If they differ that difference is something not in the Standard Model.
What kinds of quantities can we calculate? The first major success was a calculation of the hadronic spectrum [spectrum]. We've also seen precision determinations of the QCD phase diagram (vacuum properties as a function of temperature) [phase diagram] and other properties that are important for the early universe, although there is a computational resources / scaling problem for nonzero density. With QED included there's a calculation of the proton-neutron mass splitting [mass splitting]. In terms of hadronic matrix elements we can compute the axial coupling of the neutron [gA], neutral kaon mixing, matrix elements needed for nuclear calculations of neutrinoless double beta decay [0vbb], all sorts of stuff. The FLAG review [flag] collects and evaluates individual results and tries to construct a community consensus for our "best values" for certain observables.
> Is it possible to state and solve initial value problem for, say, proton-proton scattering?
You picked one of the most difficult problems in lattice QCD (and one I spent a lot of time on as a postdoc): extracting baryon-baryon scattering. At high energy actually there is a lot of progress because QCD factorizes and you get a parton picture [PDFs]. But at low energy the situation is worse. There are a few issues:
1. We calculate in a finite box, typically with periodic boundary conditions (think PacMan). So there is no way to make protons "asymptotically far apart", and once they're heading apart, they're heading towards their next encounter. This conceptual difficulty was solved in the late 1980s: what you can do is take the energy of the finite-volume standing waves and turn that into phase shifts (quantum-mechanical scattering data).
2. Our calculations are done in Euclidean time, for reasons I have explained / touched on on hn before
So we don't have real Minkowski time; you can think: instead of things evolving like exp(i H t) things evolve like exp(- H t), where H is the Hamiltonian (there's a tiny numerical illustration of this at the end of this comment).
3. Baryons suffer from a signal-to-noise issue, where the longer we look in time the worse the variance gets. The source of this issue is understood and we have variational methods to get reliable signals earlier in time. The signal-to-noise problem is easier at heavier quark (or pion) masses (which we, as computational physicists can change even though experimentalists cannot).
For a long time we sharpened our tools at ~800 MeV pion masses. Current state of the art is down around 300 MeV.
This sounds dismal but I want to say: we have the whole pipeline working end-to-end. There are phenomenal results in the meson sector [meson scattering]. There's very-close-to-physical-parameters meson-baryon scattering [n-pi].
SO: is it possible to solve the initial value problem for proton-proton scattering using LQCD?
Not yet but work is in progress.
What these calculations give are phase shifts (and inelasticities) as a function of scattering momentum.
Once these are under control we can try to get a handle on the three-body sector; the three-body force is needed for accurate calculations in nuclear physics.
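Returning to point 2, here is that tiny numerical illustration of Euclidean vs. Minkowski evolution; the 2x2 "Hamiltonian" is an arbitrary toy, nothing like a real lattice action:

    import numpy as np
    from scipy.linalg import expm

    H = np.array([[1.0, 0.3],
                  [0.3, 2.0]])                   # toy Hermitian "Hamiltonian"
    psi0 = np.array([1.0, 0.0])

    for t in (0.5, 2.0, 8.0):
        minkowski = expm(-1j * H * t) @ psi0     # exp(-iHt): oscillates, norm preserved
        euclidean = expm(-H * t) @ psi0          # exp(-Ht): decays, projects onto the lowest state
        print(t, np.linalg.norm(minkowski), np.linalg.norm(euclidean))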
Honestly I’ve never considered an analog computer, and I’m a physicist but not a lattice expert. While an analog computer would let you use real numbers, you’d still need a way to store state at every point in space, which would lead to the infinite memory problem (and you’d need infinite compute to operate on the infinite memory). Perhaps there’s a clever way to get around that, but my suspicion is that if it were possible someone would have done it already.
A Turing machine can perform all discrete computations using nothing but single bits on its tape. An idealized analog computer can compute anything with a "single" complex plane.
Our (mis)conception of there being an "infinite memory problem" when the complex plane is infinite is analogous to flatlanders getting hung up about a "discrete bit memory problem."
It's the current state of the art anyway. But it's also worth mentioning that it's more complex than just a function like x^2. When looking at a Feynman path integral you integrate an operator (mapping a function to a function) over an uncountable domain of functions. The other approach, which is used to calculate probabilities, relies on an approximation that cannot be used for simulations, though. None of this is mathematically bullet-proof; see the Wightman axioms. (And even in mechanics it's possible to construct infinities.)
You cannot have arbitrarily fine precision past a certain point, no matter how much memory you allocate.
You will run into this limit and it's a hard physical limit. Look up why it's called quantum (mechanics, physics, field theory, etc).
Before you simulate an entire universe try accurately simulating one atom. Let me know how it goes (spoiler alert: the computing power needed to do this is beyond the computing power we have today)
> The theorem is called the Nielsen-Ninomiya theorem. Among the class of quantum field theories that you cannot discretize is the one that describes our universe, the Standard Model.
This transcript doesn't mention it specifically, but the most accurate prediction they're referring to is likely the anomalous magnetic dipole moment of the electron [1], where theory matches the experimental value to at least 10 significant digits. The article talks about 12 decimal places so maybe they're referring to something else?
At the other end of the spectrum is the so-called vacuum catastrophe [2], where the predicted and actual values for the energy density of the vacuum diverge by as many as 120 orders of magnitude.
As for the incompleteness of QFT, this is well beyond my knowledge. I really wish this were an article instead of a transcript though. Transcripts are such poor means of conveying information.
One sleight of hand is to write down g instead of g-2, in which case you pick up a couple more digits but haven't added any useful information because the value of 2 is already known.
yes and no. physical theories always have a domain of validity where their predictions make sense, and QFT and QM do not give an absolute value of energy (density). sure you can do QFT on curved spacetime (Hawking had some success w/ it) but comparing the QFT vacuum energy with the cosmological constant is sloppy at best.
And then what about fluids? Are there other descriptions for things that look like that?
TIL superfluids have zero viscosity; and at that scale, galaxies are supposedly superfluidic; at least one formulation of "superfluid quantum gravity" has Bernoulli's && GR and it supposedly works.
"And cells, in turn, are made of molecules and molecules are made of atoms. Dig even deeper and pretty soon you’ll find yourself at the level of electrons and quarks. These are the particles that have traditionally been considered to be the end of the line, the fundamental building blocks of matter."
To me, the so-called "end of the line" is a philosophical question, not a physics question. For the simple reason that we can never know if something is really indivisible, or if we are just unable to divide it because of our insufficient technology. We can never know this. The story of physics makes this clear. Each generation of physicists, with new technology available to them, takes pride in showing that the previous generation was lying, and that it is their atoms which are the real "end of the road". Then comes the next generation with a better technology...
I would love to hear any critical take on David Tong's excellent video here:
https://www.youtube.com/watch?v=zNVQfWC_evg
"Quantum Fields: The Real Building Blocks of the Universe"
I'm interested to talk to others who want to think about QFT past the maths. I had my own stab at making sense of related ontological implications here:
http://www.katabane.com/mt/ontology.html#back8
I really detest this kind of peppy proselytizing lecture. It is about mainstream science, but the poor form and lack of respect for the listener is horrible.
He is wooing the public a lot with a priest-like confidence on what the world is made of, look how amazing our 12 digit results are, everything is fields, even particles are really fields, even your bodies are fields, we can't really calculate this, it's so hard, but trust us. Matter, universe, big-bang, black holes, Faraday, Maxwell, Einstein, big scary equation, the single greatest equation, bombard and overwhelm the listeners with lots of astonishing statements and make them believe that we are really smart.
No experimental rationale for the reality of quantum fields; no mention of the opposing viewpoints of experts such as Schwinger (his source theory), or of a large part of particle physics experimenters who tend to think that particles like electrons are real and quantum fields are really just tools that exist in our brains and on paper to predict what happens to these particles.
I am afraid this is not teaching people anything of value about physics. It's more about what this group of physicists (high energy theorists) are occupying themselves with.
I do hate this stuff, same with Sabine and Carlos, the latter giving me a huge number of lay people that religiously believe in MWI.
However, I would say that most physicists do believe that quantum fields are the most basic building block of our world, so this isn’t some kind of pop science mess or even a mischaracterisation. Could it be the religion of physicists? Maybe. But it is a widespread one among experts.
Physicists generally consider their models to be real rather than just a mental idea that produces the correct results. This kind of thing was controversial in quantum theory until Bell's theorem proved that quantum mechanics says things about reality itself (that it is either non-local or non-deterministic) and the PBR theorem proved that the wavefunction is in some sense ontologically real. Now it is less controversial to say that quantum fields are real and the building block of reality.
This is an enormous area of study - one that I have followed for a few decades now. The issue comes down to the 'ontological commitment'.
Scientists have grown less and less willing to make the ontological commitment. These days, if I ever see it, it shocks me.
That's why I'm so interested in this video. What has changed that would make a physicist so confident as to make these kinds of career-breaking ontological commitments?
The most excellent book I have read on the topic, and I highly recommend this to everyone on this thread: Arthur Fine "The Shaky Game: Einstein's Realism and the Quantum Theory". I would love to hear suggestions for books to read as follow-up to this.
> Scientists have grown less and less willing to make the ontological commitment. These days, if I ever see it, it shocks me.
I actually think it's the opposite. The PBR theorem even says that the wavefunction is ontological in some sense. People are more willing to make ontological commitments now than they were in the early days of quantum mechanics.
> career-breaking ontological commitments
It isn't career breaking at all. My boss is literally publishing a paper about how photons ontologically do not exist. Bell's inequalities and things like non-complementarity have a much stronger hold in people's minds due to modern tabletop uses of things like quantum correlations and EPR entanglement.
I basically disagree with the premise of your post. I am a postdoc in quantum optics, specialising in high-precision measurements and quantum measurement in general, and I think people are more willing than ever to choose ontological positions on QM.
I am very interested to learn more about any physicist who is willing to make ontological commitments. Ultimately, such a commitment is to realism - what is the epistemological support?
I strongly doubt the myriad complexities here have been resolved. Or are you proposing a kind of 'natural ontological attitude'?
There isn't an "epistemological support", it's all philosophical arguments. But physicists are nowadays willing to engage in philosophical arguments: your long wait is over. I went to a conference called Emergent Quantum Mechanics (EmQM) at the start of my PhD and it was almost all a semi-philosophical inquiry. Literally the chosen topic of the conference was "towards an ontology of quantum mechanics"
If you think that physicists in general aren't working on these things you are probably confined within one subfield of physics. There are definitely thousands of physicists who spend a lot of time reading and writing papers on these kind of ontological topics
If there is no epistemic support, how is this different from belief? And if it is belief, how is that epistemologically more or less sound than any of the world's other belief systems? How does this jive with Wittgenstein? ("Whereof one cannot speak, thereof one must be silent"). How do we reconcile Effie's (excellent) comment above? And a boat-load of other questions besides...
It's one thing to say, we're pretty sure A = B. It is another thing to say, we have definite knowledge that A = B. The epistemologers can be vicious gatekeepers!
To be clear, I want nothing more than to see a break in the log-jam of 20th-21st century epistemology. In fact, I am fighting for it. But I'm extremely skeptical that there is (or can be) any progress here, without a drastic and new approach. I am delighted to see brave physicists making ontological commitments - perhaps they are not aware of the tangled web that awaits them.
This is why Fine's book is called "The Shaky Game". Epistemological grounding for claims in physics is... shaky at best. I am not satisfied with Fine's concession - the NOA (natural ontological attitude). I believe that only a drastic solution will resolve it.
Please, please, please read Fine's book. It lays out the central problems here much better than I can hope to do, and is a pleasant and engaging read. And please stay in touch here and tell me what you think! I am practically obsessed with this topic, but I am not an academic and have little time to research it.
The central binary here is 'realism' and 'irrealism'. In the simplest reduction, on one side, people say, 'this is what is REAL'; on the other, people say, 'prove it... psst, you can't'. Einstein was a realist, and was extensively ridiculed for it. I wouldn't be surprised if there are no actual ontological realists in physics today, though it is my sincere hope that there may be a few brave souls out there. I suspect that if you engage Tong (e.g.) in the deeper parts of this topic, he would eventually concede to a natural ontological attitude, and back off of the commitment.
> If there is no epistemic support, how is this different from belief?
It isn't that different from belief. It just seems better because it is based on a physical theory. It isn't about what is proven empirically but about what feels more likely based on the current empirical proof.
> "Whereof one cannot speak, thereof one must be silent"
I don't agree with that, and I don't think most people do.
> I am delighted to see brave physicists making ontological commitments - perhaps they are not aware of the tangled web that awaits them.
Remember that almost all physicists have no philosophy education, including those making ontological commitments.
> Please, please, please read Fine's book.
I probably won't, sorry. I am reading Marx and Hegel right now. Maybe in a decade or two.
> Einstein was a realist, and was extensively ridiculed for it.
I don't think so. A lot of people still think like Einstein or admire his commitment to a complete theory of reality. That's why we still have a lot of people interested in Bohmian mechanics.
> I wouldn't be surprised if there are no actual ontological realists in physics today
Most physicists are taught the Copenhagen interpretation with the first principle stated as "the wavefunction is the full physical state of the system".
> natural ontological attitude
I obviously don't know what that is
I think you are projecting a sense of philosophical expertise that is absolutely not present in modern physics. Again remember that almost all of us have literally zero training in any kind of philosophy. I am from the UK system, in the UK we do not study philosophy in high school, and at university we literally only do lectures that are directly related to our "major". I never did a non-physics lecture. The philosophical viewpoints of physicists is entirely based on a mixture of their pre-existing social conditions (whether or not they are religious) combined with their gut reaction to the physics.
Thank you for your reply here. You've nailed it - there is an ever-widening gap between physics and philosophy, which is concerning to me for various reasons. In the end, I feel it is holding things back, but perhaps we have entered a new era.
A few points, with the hope that they are helpful, but bear in mind that I am also not an expert:
1. I too, disagree with Wittgenstein, though his statement is mostly logical in context. Disagreeing here is tricky.
2. Einstein was definitely a well-documented realist, and the debate is central to what I'm trying to get at here. I tend to defend him, though I suspect I am nearly alone among 'philosophers'. I think everyone else has jumped from what they see as the sinking ship of 'realism'. I suspect it is fully sunk at this point, except for a few die-hard realists out there. (Are there any left?)
3. The "natural ontological attitude" is a realist concession. It's basically what you are talking about, with some complexities for the interested. I like it, but it doesn't resolve anything; it just says, 'this epistemological nit-picking is annoying - I'm just saying that "it just seems better because it is based on a physical theory..."'
My apologies for being pretentious or presumptuous about philosophical concepts. I just wish there were more open discussion of where these spheres overlap. Again, thank you for your critical thoughts here.
And truly, read the Fine book in a decade or whenever. It's amazing.
I will try and remember when it is published. The argument is quite strong though in my opinion, basically that photons are simply Fourier-limited modes, so they always deposit energy over a time window according to the uncertainty relation df x dt = pi/4. Also this means they really don't exist as point particles
EDIT: Also it of course references Lamb's paper, which I don't think was badly received afaik, just not really widely considered
> they always deposit energy over a time window ... Also this means they really don't exist as point particles
I am afraid this is already the orthodox view, despite the suggestive/misleading pictures used when teaching particle physics (e.g. Compton's scattering).
But even the well-informed photon concept where a photon isn't a point, isn't a plane wave, and usually there is indefinite number of them, is quite horrible. I hope some day the status quo will move where we realize that both the hand wavy imprecise photon talk in textbooks/pop-sci books and the sophist "just a figure of speech when we're talking about quantum field" will be abandoned in favor of more accurate technical terms.
> most physicists do believe that quantum fields are the most basic building block of our world
Do they? Maybe most theoretical physicists, because that's what they work with, they often are platonists and there is no better model. I doubt the rest, in particular experimental physicists, have even solid one or the other way opinion on this. Ontology is too much philosophy and too little physics.
Agreed, ontology is philosophy, not physics. It asks, what kinds of things exist? It straddles the line of the empirical and the conceptual, asking, do 'facts' exist, and can there be 'moral facts'? Here be dragons, and epistemology is the treasure.
Physics reports observations of experiments and theoretical work, but wisely leaves epistemology to philosophers - a hard learned lesson. Einstein was ridiculed for his ontological realism, and the trend has been in the opposite direction ever since.
I am interested in brave physicists who have stood up to the epistemologers, and are willing to make ontological commitments. In my understanding, historically, this attitude poses risks to one's career in physics.
They do. Just because it's philosophy and not physics doesn't mean that they don't have a philosophical opinion on it. I agree it's not physics, doesn't mean people don't believe in it
I suspect it's an approximation of much lower level deterministic processes that we don't understand yet.
One serious issue is renormalization, a crude mathematical hack that nobody likes but has turned out to be a very useful workaround to our limited knowledge.
Renormalization hasn't been a conceptual problem since Ken Wilson's work in the 1970s. Before that, it looked like a mathematical hack (rather like linear algebra without the geometric interpretation) and a lot of physicists who came of age in the postwar period never got over this.
Pretty sure it's been proven that no deterministic theory can reproduce all the predictions of quantum mechanics. That's the essence of Bell's theorem isn't it?
> Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles were able to interact instantaneously no matter how widely the two particles are separated.
It's true that with our current theories, instantaneous action at a distance does not seem possible. Yet quantum experiments (wave function collapse over large distances) consistently suggest this might be possible at macro (non-quantum) scales. I think our knowledge is incomplete on this topic.
Gravity is another area where I don't think non-local effects have been fully ruled out. Gravitational waves (energy emitted/thrown off by certain mass/momentum configurations) have been proven to travel at or near the speed of light, but the propagation speed of first-order gravitational effects is an open question. This is an area where I would love to see more experimentation done: simultaneously measuring the gravitational and optical positions of the Moon or Sun, for example.
Wolfram speculated on this idea: if spacetime is a graph, it may approximate the 3+1 topology on average, but in some places two distant nodes can be connected by a direct link. I'd speculate further that such isolated links may be enough to entangle low-level quantum properties of two particles, but not enough to be a tunnel even for a single photon.
Is the assumption that a signal needs to travel and that the speed of light is the limiting factor? My thought was that they're interacting in some other fashion, such that they are physically connected to each other. If that were the case, it would be instantaneous
Interaction via a hidden physical connection would only be instantaneous if that connection were zero length and only apparently faster than the speed of light when the connection is shorter than the shortest apparent path between the two points.
From the viewpoint of an electron, the moment it is emitted from an antenna as a photon and the moment it is absorbed in another antenna as an electron are the same moment i.e. the photon travels zero distance in zero time.
Somehow our communications equipment still maintains causality.
What? I never claimed that. Radio waves are electromagnetic radiation just like visible light.
Radio amplifiers pump electrons into the antenna, which transmits them as photons, and the receiving antenna absorbs them, turning them into electrons, the flow of which then becomes the audible radio signal which gets (demodulated and) amplified by the receiver.
The photons do not travel any distance and they do so in zero time. This can be mathematically proven.
”The closer you get to light speed, the less time you experience and the shorter a distance you experience. You may recall that these numbers begin to approach zero. According to relativity, mass can never move through the Universe at light speed. Mass will increase to infinity, and the amount of energy required to move it any faster will also be infinite. But for light itself, which is already moving at light speed… You guessed it, the photons reach zero distance and zero time.
Photons can take hundreds of thousands of years to travel from the core of the Sun until they reach the surface and fly off into space. And yet, that final journey, that could take it billions of light years across space, was no different from jumping from atom to atom.”
Under your conception, how does RADAR work? Consider bouncing a RADAR signal off two objects at different distances (1 m, 10 m; or the Moon and Venus [1] when the two are very close to each other in the early evening or early morning sky). Do the RADAR reflections return instantly, or at the same time, or at different times, according to the wristwatch of the RADAR operator, or the RADAR apparatus's computer clock? How does that work with your comment, "the photons do not travel any distance and they do so in zero time. This can be mathematically proven."?
Can we agree that each of the RADAR and the two RADAR-reflecting objects definitely occupy different points in space?
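For scale, here are the rough round-trip numbers behind the question (approximate distances: Moon ~384,400 km, Venus ~41 million km at a close approach):

    c = 299_792_458                  # speed of light, m/s
    distances = {"Moon": 3.844e8, "Venus (close approach)": 4.1e10}   # metres, approximate
    for name, d in distances.items():
        print(name, round(2 * d / c, 1), "seconds round trip")   # ~2.6 s vs ~270 s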
From the frame of reference of a transmitted photon, the transmission is instantaneous, and the spatial distance traveled is zero.
From our perspective, it is a different story. If you are having difficulty grasping this, perhaps you are thinking that photons are like tennis balls but with a very high velocity. Perhaps you even entertain the sci-fi notion of faster than light travel.
However, speed of light in a vacuum is a much more fundamental limit than an inherent velocity limit of some particle.
It is essentially the default propagation speed of all physical information. Interaction of light with matter causes absorption and re-emission of photons, which causes light to both slow down in a medium and pick up some information about whatever it went through.
I don't think I am. Compare my longer comment in this thread <https://news.ycombinator.com/item?id=32462046> which makes reference to Wald's standard textbook. It's not just Wald - any standard textbook will agree. I can give you chapter references for practically any of them if you like.
What I was wondering was how you reconcile the difference in time at a RADAR station when receiving photons bounced off targets at different spatial distances with, quoting you, "the photons do not travel any distance and they do so in zero time".
Surely time passes at the RADAR transceiver? Otherwise, wouldn't we get instant returns back from the Moon and Venus, instead of having to wait seconds to minutes?
A broader question might be, "is there a 'best' clock for analyzing a RADAR detection, and if so, which clock?".
Finally, you might want to wonder if an in-flight photon changes spin states or frequency. Doesn't a RADAR photon bouncing off a moving target (or returning to a moving transceiver) undergo a doppler shift, and thus change of frequency? How do police RADAR guns work if photons do not feel time?
It strikes me that you didn't care to answer my earlier RADAR questions at all among your five paragraphs in reply. If you don't care to do so, or you don't want to think about any of this, that's fine. But I think you should take care in judging how much or how little anyone on hackernews might know about relativity, or what sci-fi-like concepts they might entertain.
I believe the gp is correct, the closer you get to the speed of light the slower time passes. If a particle travels at the speed of light, no time passes for it.
You're part-way there. First I'll explain how it works in standard textbook relativity, then I'll reword your first sentence a bit.
p != p' are two points on a 3+1 dimensional Lorentzian manifold (M, g), modelling our spacetime.
There is a minimizing geodesic from p to p', the "shortest straight-line path".
In a Lorentzian spacetime, geodesics are classified into three groups: spacelike, null, and timelike. A null geodesic describes the freefall of the massless relativistic wave equation, which applies to e.g. photons (as the Maxwell equations using the Lorenz gauge); and a timelike geodesic describes the massive relativistic wave equation, which applies to e.g. neutrinos (as the Dirac equation) or the Higgs boson (as the Klein-Gordon equation). In a particle paradigm, photons move at c freely falling on null geodesics; neutrinos, electrons and so forth move at less than c freely falling (experiencing no accelerating force) on timelike geodesics.
If p and p' are timelike-separated, they can be connected by a timelike geodesic (e.g., a path that could be taken by a massive particle). Then we can calculate a nonzero spacetime interval S in some coordinate basis and divide by c, giving us the proper time \tau such that d\tau = dS / c in a patch of flat spacetime. (In general curved spacetime the metric g enters the definition: on a path P, \Delta \tau = \int_{P} \frac{1}{c} \sqrt{g_{\mu\nu}\, dx^\mu dx^\nu} = \int_{P} d\tau, which is invariant under changes of coordinates, and involves the same division by c.)
However, if spacetime points p and p' are lightlike-separated, they cannot be connected by a timelike geodesic, but rather by a null one. The spacetime interval of a lightlike geodesic is defined such that dS = 0 (thus the name "null").
Any path through spacetime can be parametrized by applying arbitrary labels to its points. A geodesic is a species of path. We can label any geodesic from A to B any way we like: Apple Zebra Xray 3.1415 1 86 Xray-again Manitoba Xray Xray-again ... 911. Arbitrary labellings are not widely useful, though, so we will probably want to label every point with a unique value in such a way that there is a total ordering from A to B, and even more ideally so that there is some smooth monotonically non-decreasing function f on the geodesic, mapping all its points to unique values of the parameter.
Proper time is just such a useful function on a timelike geodesic. It is not the only such function.
On a null geodesic, proper time would give us a labelling 0 0 0 0 0 ... 0, which is obviously not useful, as the 0s aren't unique or ordered. However, we can use any monotonic function, setting out labels at every point along the path from p to p' where we would (if we measured) detect the photon.
A good choice for a null geodesic is the affine parameter, as that preserves the photon's tangent vector under parallel transport along the path. There is a unique affine parametrization satisfying the geodesic equation for every geodesic, including every null geodesic.
One of the properties of the affine parameter on a null geodesic is that we can define a momentum k^{\mu} = \dot{X}^{\mu}, the dot representing the derivative with respect to the affine parameter -- the momentum k can be calculated at every point on the null geodesic, and in a curved spacetime k will differ from point to point. This difference in momentum is exactly the gravitational redshift.
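A minimal flat-spacetime illustration of this (my own example, not from the parent comment), using the Minkowski metric \eta_{\mu\nu}: a null geodesic can be written x^\mu(\lambda) = x_0^\mu + \lambda k^\mu with \eta_{\mu\nu} k^\mu k^\nu = 0. Along it, dS^2 = \eta_{\mu\nu} dx^\mu dx^\nu = (\eta_{\mu\nu} k^\mu k^\nu) d\lambda^2 = 0, so the proper time between any two of its points vanishes, yet \lambda increases monotonically and labels each point uniquely, and k^\mu = dx^\mu / d\lambda is constant along the path (flat spacetime, hence no gravitational redshift).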
So, it might be better to say: proper time is an unsuitable parameter for a photon in free fall, or for labelling points on its path from point A to point B. Instead we can use affine time (== the affine parameter) for a photon and for its path from point A to point B, and see that there is some definite timelike parameter for a photon on its journey that can [a] show where in spacetime it can be found during that journey and [b] what its momentum is at [a] and thus what its frequency, wavelength or energy is at [a] via the E = h{\nu} = \frac{hc}{\lambda} relation, and how those quantities change along the path in the presence of spacetime curvature.
I would not say "no time passes for it", because we can always define such a time. Affine time is one such (useful) definition, but we can use any others we like. Although a proper time can be calculated for such a massless particle, the proper time is 0 everywhere, so not very useful. However that does not strictly speaking mean that there is no passage of time from the point of view of the photon. It just needs to "wear a different sort of wristwatch" than an electron might choose to. However there is an affine parameterization for an electron moving between points I and J, so electrons (and neutrinos and other non-massless waves, particles, or objects) can always "wear the same sort of wristwatch" as a photon. (Some mathematical details in Wald's graduate-level textbook _General Relativity_ §3.3 and see ch. 3, problem 5).
Electrons are not transformed into photons; that would violate conservation of lepton number. Rather, an electron is accelerated by the antenna and as it accelerates it emits a photon. The electron still exists with the antenna afterward, with a lower energy.
I've read somewhere that photons, while travelling, casually turn into electron-positron pairs and back into photons. It's quite possible that both are just distortions in the electromagnetic field.
The two explanations or interpretations I’ve come across that recover determinism are many worlds interpretation and superdeterminism. I don’t think either are testable with current technology so it’s not a very satisfying or experimentally useful explanation. At least not yet.
No. Bell's theorem states that local hidden variable theory obeys Bell's inequalities. Quantum theory models and some experiments with light violate those inequalities. At best, we can conclude that any deterministic theory supplanting quantum theory can't be Bell-local, or that it violates some other assumption of Bell's theorem (e.g. independence of actions at two separated experimental sites).
Basically that non-local hidden variables and superdeterminism do not seem like likely solutions to the violation of Bell's inequalities compared to just accepting that nature is random. You have to do a fair amount of reaching in order to hang onto the notion that the universe is deterministic
Except that we know entanglement is non-local: this suggests that non-locality rather than non-determinism is the correct resolution to Bell’s inequalities.
Non-determinism is an extraneous assumption if we’re forced to accept non-locality for other reasons.
Edit:
I’ve hit my posting cap, so replying in edit —
If we know particles are entangled and subject to non-local effects, we’re already concluding that there’s a hidden non-local variable: the sum of all entanglements we don’t know about.
You can model that as a randomness distribution for calculations, but that doesn’t make the universe non-deterministic.
Edit 2:
I think where we diverge is in the belief you’re measuring “nothing” and so it’s strange you’re getting slightly varying results. I’ll have to think about that aspect some more.
I’ll leave off here with a big thanks for giving such detailed replies!
(replying to the edit) It's still just very unlikely. My work is based around reducing the fundamental quantum noise in gravitational wave detectors. This is the noise that arises from the following process: if you set up many copies of an experiment where you measure the electric field E in the vacuum state |0> (no photons) you will get a spread of results around E = 0. This is actually the largest noise affecting advanced gravitational wave detectors such as LIGO at their most sensitive frequencies (the classical noise sources have been reduced so much).
If I have a situation where I take the quantum vacuum and get a different, random result every time I measure it, it seems like quite a stretch to conclude that it is _really_ deterministic.
Further, if quantum states are described by wave functions then they must be intrinsically random due to the mathematics of the Fourier transform.
> If I have a situation where I take the quantum vacuum and get a different, random result every time I measure it, it seems like quite a stretch to conclude that it is _really_ deterministic.
If you take a number generator and get a different random result every time you call it, you have no idea whether it is deterministic (algorithmic) or not. One can try to do statistical tests, but these can reveal only weak algorithms, not sufficiently good ones.
> Further, if quantum states are described by wave functions then they must be intrinsically random due to the mathematics of the Fourier transform.
That is a very strange statement. Which part of the mathematics of the Fourier transform suggests wave functions are intrinsically random?
If I was given a random number generator, _and_ there was no way to check if it were deterministic by experiment, I would lean on the side of not constructing a mechanism to justify that it is deterministic. Unless there is an experiment to prove otherwise, we should just accept that it plainly presents itself to us as random.
The mathematics of the Fourier transform enforces that the frequency spread times the time spread is at least a constant. Therefore if the quantum state of a system is fully described by its wavefunction, then the system absolutely cannot have predetermined values of conjugate variables. It takes many more wavenumbers to describe a wavefunction that is well localised in space.
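As a concrete example (my own, and the exact constant depends on how the widths are defined), take a Gaussian pulse g(t) = e^{-t^2/(2\sigma^2)}. With the e^{-2\pi i f t} convention its Fourier transform is \hat{g}(f) = \sqrt{2\pi}\,\sigma\, e^{-2\pi^2 \sigma^2 f^2}, a Gaussian of width \sigma_f = 1/(2\pi\sigma), so \sigma_t \sigma_f = 1/(2\pi) no matter how \sigma is chosen: squeezing the pulse in time necessarily spreads it in frequency.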
> Therefore if the quantum state of a system is fully described by its wavefunction
Big if. Real systems interact with their environment, which means every psi calculation is approximate, as there is no limit to the possible expansion of the domain the wave function is defined on. If we want a full description, we end up with the necessity of working with the wave function of the whole universe. This means that the argument for the non-existence of simultaneous values of conjugates does not work. This was after all analyzed by EPR and Bell, and the result is that conjugate pairs may simultaneously exist in non-local theories.
The PBR theorem proved that there cannot be two different quantum wavefunctions that correspond to the same underlying physical state https://en.wikipedia.org/wiki/PBR_theorem
>_and_ there was no way to check if it were deterministic by experiment, I would lean on the side of not constructing a mechanism to justify that it is deterministic.
This is the core of your disagreement. You suppose Occam's razor dictates something appearing unfalsifiably random is in-fact random. The other side supposes that you're the one making a stretch, since everything which can be falsified categorically turns out to be deterministic.
> everything which can be falsified categorically turns out to be deterministic.
I've never heard of this. Why do you think this is true? Is that because QM is our first example of inherent randomness in physics and all previous physics was deterministic? I don't understand the argument you are pointing out
> Is that because... all previous physics was deterministic?
No, it's a purely logical argument. Randomness is fundamentally unprovable/unfalsifiable. Therefore, in principle, any possible process in any imaginable system can only ultimately be found to be either provably deterministic or unfalsifiable.
On the other hand, it is possible for there to be unfalsifiable structures which determine the outcome of apparently random processes which are not truly random. In fact, it's trivial to create practical cases of such sources of pseudorandomness.
You believe that positing the existence of one such structure to be the bigger leap simply because it may be not just practically unfalsifiable but in-principle unfalsifiable (though that is undetermined).
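A trivial instance of the point above (a textbook xorshift-style recurrence, chosen purely for illustration): the output looks random to an observer, but is fully determined by the seed.

    def xorshift32(seed: int):
        x = seed
        while True:
            x ^= (x << 13) & 0xFFFFFFFF          # 32-bit xorshift steps
            x ^= x >> 17
            x ^= (x << 5) & 0xFFFFFFFF
            yield x / 2**32                      # looks like a uniform draw in [0, 1)

    gen = xorshift32(123456789)
    print([round(next(gen), 3) for _ in range(5)])   # "random" outputs, entirely deterministic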
Except non-local deterministic theories are much more complicated and almost impossible to use. That's why a lot of physicists regard the result as showing that reality is both non-local and non-deterministic. Very few suspect a non-local hidden variables theory.
I don't think non-determinism is an extraneous assumption, given the fundamental randomness apparent in quantum measurements. In my opinion, hidden variables are the extra thing added to account for this, and indeed it's very difficult to use Bohmian mechanics with relativity etc.
There's a very good book on quantum physics, published in 1978, by PCW Davies, called 'The Forces of Nature', which discusses most of this in nice detail. One gets the sense that not a whole lot of new physical theories have been confirmed since this book was published.
For example, the renormalization problem:
> 'Fortunately the problem of infinite self-energy can be overcome... Manipulating infinite quantities requires some mathematical care, but it can be proved that all <observable> quantities are finite. The technique, developed in the 1930s and 1940s, of absorbing infinities into unobservable 'bare' quantities to get a finite answer, is called renormalization. It may appear like a trick, and nobody pretends that it is completely satisfactory, but without renormalization the predictive power of quantum electrodynamics would disintegrate. With it, the answers obtained have the legendary accuracy already described.'
According to this book, the problems with renormalization are much more severe with weak interactions:
> 'The renormalizability of QED can be traced directly to the masslessness of the photon. Like all massless particles that spin, it can direct its spin either parallel or antiparallel to its direction of motion, but not in between as well. In contrast the massive W can align its spin in three different directions, for example, parallel, antiparallel and perpendicular to its motion. This seemingly innocuous property is the cause of all the difficulty, because it turns out it is the W particles with the perpendicular spin directions that prevent the infinities from being renormalized away."
> "These observations suggest that if the W particle were massless, it might be possible to construct a unified renormalizable theory of weak and electromagnetic interactions in which the photon and W are combined, like the hadrons, into a single family."
Notably there's no mention of the Higgs boson in this book, which is nevertheless remarkable in how it covers the background of almost every physics story I've come across for years. However, is this basically the entry point to why the Higgs boson is important? For example:
> "In the Standard Model, the W± and Z0 bosons, and the photon, are produced through the spontaneous symmetry breaking of the electroweak symmetry SU(2) × U(1)Y to U(1)em, effected by the Higgs mechanism (see also Higgs boson), an elaborate quantum field theoretic phenomenon that "spontaneously" alters the realization of the symmetry and rearranges degrees of freedom."
Was this said prior to the formulation of modern QFT? I imagine Tesla would agree that QFT is the thing to be done to add waviness to the particular questions we humans like to ask, although he didn't exactly believe in the concept of electron, maybe he would see its formulation in QFT and say yes?
I think Tesla would still object to seeing quantum phenomena through such a classical lens. It is very much ”not seeing the forest for the trees”.
In fact, Tesla might even object to the usage of the word ”quantum”, since it implies that same top-down view. Tesla would start from the field(s) and let whatever phenomena emerge from that, quantized or not.
Tesla may have been a capable engineer in his early years (before turning into a crackpot and/or fraud). However, his understanding of modern physics appears to have been extremely poor. He certainly has not contributed anything significant to the field.
There are two main reasons why the theory is “incomplete”
1> human perception is not often considered; in many ways humans are like dogs trying to see in full color depth. Humans are “colorblind” to types of information.
2> structure of the flow is not often considered: the assumption is a flow through some X-dimensional space: what if this flow is defined according to a chaotic map: not just a fractal but a subset of other rules which create chaotic flows across multiple dimensions in time which collapse /retroactively/ along a chaotic Riemann geometry
[0] https://www.goodreads.com/book/show/18781406-quantum-field-t...