People are talking about logic gates and computing. While these things are associated with electric circuits, strictly speaking, they don't have to be. It's sort of a separate domain altogether.
For something that is strictly an electric circuit and uses recursion, take a look at an op-amp: https://en.wikipedia.org/wiki/Operational_amplifier. Normal operation of an op-amp basically involves a feedback loop, which is the analog equivalent of recursion.
Basically anything with a feedback loop is recursion.
I will respectfully disagree about opamp feedback loops being recursion. It is almost magical but I do not see any recursive nature in it.
You can't compute fib(n) recursively with O(1) transistors. Maybe if you use the non-recursive exponentiation formula.
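A sketch of that non-recursive route (Python, illustrative): fib(n) by repeated squaring of the 2x2 Fibonacci matrix, which takes O(log n) steps and no call stack.

```python
def fib(n):
    """fib(n) via repeated squaring of [[1, 1], [1, 0]].
    M**n = [[F(n+1), F(n)], [F(n), F(n-1)]] for n >= 1."""
    def mat_mul(a, b):
        return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
                [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]
    result = [[1, 0], [0, 1]]   # identity
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]
```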
Analog electronics is different from digital computation because it's not about inputs and outputs, cause and effect, in the same manner. A circuit maintains an equilibrium between inputs and outputs. The input doesn't cause the output, since changing the output may affect the input.
Just a simple resistor is an example: is voltage or current the input to Ohm's law? Neither is.
Same with the opamp feedback loop. It looks like it feeds its output to the input and then repeats. But it doesn't, it just maintains the same relationship between them by pumping electrons from the power supply. Short circuit the output to ground and you will see the circuit load the input circuit more.
That said opamp circuits are very cool. The patent for negative feedback was rejected as "another perpetual motion machine". Yet it works and it works magically.
A (bad) opamp can be cobbled together from 4 or 5 transistors. Yet it can do, together with negative feedback: addition, subtraction, multiplication by a constant, integration, differentiation, exponentials and logarithms. Probably others too. Combine log, add, and exp and you have a multiplier.
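That log/add/exp trick can be sketched numerically (Python; an idealized model of the analog blocks, positive inputs only):

```python
import math

def analog_multiply(a, b):
    """Multiply two positive 'voltages' the way a log/antilog op-amp
    chain would: take logs, sum them, exponentiate the sum."""
    return math.exp(math.log(a) + math.log(b))
```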
But it won't do a Fibonacci, a factorial or the Ackermann function.
There is no way I am aware of that could compute anything of recursive nature with a constant number of parts. You need some form of memory ("space") and a method of repeating the same computation ("time") to qualify.
You certainly can build this kind of computer with digital, or even analog, circuits. But negative feedback in opamps is not it.
This is my contribution to the Sunday morning semantics debate on the nature of recursive computation.
> Analog [is] not about inputs and outputs [...] A circuit maintains an equilibrium between inputs and outputs.
Capacitors provide memory and time-varying behavior. That’s one thing I’m sure you just forgot about for a minute. There are analog A/V circuits all around the world that explicitly feed back to themselves, thus in a sense explicitly computing a recursive function.
> There is no way I am aware of that could compute anything of recursive nature with a constant number of parts.
You’re forgetting about tail-recursion. In addition, a stack is still a constant number of parts (all computers & digital circuits have a recursion limit), and circuits are perfectly capable of repeating the same computation, even analog ones, so they qualify there at least.
Capacitors and inductors are "memory" but only one unit or "word" per component. You need more than one to make a stack to have recursion.
Mercury delay lines, neon bulbs, magnetic drums or disks and lots of other components are also capable of storing a quantity for some time.
Tail recursion is just a loop, and we certainly can do simple repetition in a circuit.
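For example (Python, illustrative), a tail-recursive Fibonacci and its loop form compute the same thing with O(1) live state:

```python
def fib_tail(n, a=0, b=1):
    """Tail-recursive Fibonacci: the recursive call is the last thing
    done, so only O(1) state is live at any time."""
    return a if n == 0 else fib_tail(n - 1, b, a + b)

def fib_loop(n):
    """The same computation with the tail call rewritten as a loop."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```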
I'm not arguing that you can't build a digital or analog computer capable of recursion; you certainly can, but it takes O(n) components, where n is the recursion depth.
But I am arguing that a feedback loop in an opamp circuit is not a form of recursion. It is not capable of computing the Ackermann function or any other recursive function, which I consider the smoke test here.
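The smoke test itself, for reference (Python; the standard definition): Ackermann recurses on its own result, so the call depth genuinely needs a stack, unlike a tail call.

```python
def ackermann(m, n):
    """Doubly recursive Ackermann function. The inner call feeds the
    outer one, so this cannot be rewritten as a simple loop."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```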
Yes, some recursion can be implemented as a loop. Op-amp feedback and analog audio delays, just to give two examples, certainly are a valid subset of real recursion: the kind that only recurses once, as opposed to two or more times. It is a form of recursion, so I still disagree with your argument. Note the actual question explicitly stated that tail-recursion is in-bounds.
The Ackermann function recurses twice, so yes, it would need some kind of memory, which could theoretically be implemented with an analog stack of some kind. And I agree that loop-based (single-recurse) op-amp feedback can't compute Ackermann, but the question was whether recursion is possible in analog circuits, and the answer is that single-recurse already exists widely, and double-recurse or more could be done by someone determined; there's no theoretical constraint.
Indeed - the capacitor and the inductor go together like peanuts and chocolate. Maybe try to find an electronic device you own that doesn’t have an LC circuit in it! ;)
Of course an electric circuit cannot be «recursive» as such, because the physical ontology in which it is defined does not admit mereological (part/whole) relations; that would be «spooky action at a distance».
The signal function, being an interpretation (or specification) of the circuit, is a metaphysical ontology and is often defined in terms of itself at adjacent points in its domain, i.e. mereological in nature, which is to say: recursive.
> You can't compute fib(n) recursively with O(1) transistors. [...] There is no way I am aware of that could compute anything of recursive nature with a constant number of parts.
Every CPU in existence has a fixed number of transistors, so by this argument you can't compute fib(n) on any computer. In one sense this is technically correct, because there is no bound on n, so you will encounter some n after which fib(n) will fail.
Obviously this is not what you meant though. The fixed number of parts is not the issue; it's whether they can be arranged to compute the function you want using the equilibrium behaviour you describe. Obviously this can be done if this equilibrium behaviour reproduces digital logic, so I'm not convinced it can't be done in analog form, there just hasn't been a pressing need for it.
> Every CPU in existence has a fixed number of transistors,
Fixed and very large number. It can compute fib(n) for a finite but large number n. Even with infinite time, my computer couldn't compute fib(2^128) recursively because there is not enough memory to store the stack.
You need O(recursion depth) memory, which implies number of components in the same order of magnitude.
Right, but my point is that the component count scaling requirements for an analog circuit are basically the same, so of course you can compute fib(N) using analog circuitry. You can even do it using a fixed number of components, as long as the component count is large enough.
As long as a system can be defined in terms of itself, it fits the official definition of recursion.
Whether that recursion is capable of doing certain things is another matter entirely.
This is a semantic issue. It's not worth getting too deep into how I define a word or how you define a word. But I will say that my definition of the word recursion is more in line with the official definition, and that official definition does encompass an opamp.
The patent for negative feedback was rejected as "another perpetual motion machine" - I’d never heard this! Do you have a reference so I can read more?
Just define an escape condition, like "recurse until we reach zero". I don't think recursion has to be infinite? One example is tail recursion, which a compiler can statically turn into a loop.
It seems like you're saying recursion is applying a sequence of functions to each other like f1(f2(f3(x))). This doesn't seem right to me -- it should be more like f(f(f(x)))
> It looks like it feeds its output to the input and then repeats. But it doesn't, it just maintains the same relationship between them by pumping electrons from the power supply.
Why do you say that? Of course there is. It would be more accurate to say all circuitry is built out of function applications, both analog and digital. If you can write an equation for the voltage on a given trace, the circuit applies a function. If you can write a truth table for a given component, that’s a function applicator. The whole reason we can build software functions is because we build them on top of circuit functions. Saying feedback doesn’t exist because it’s just electrons is like saying people are just atoms; it ignores the structure of the system. While all electric circuits pump electrons, signal feedback is a physical reality, one that you can even see & hear first-hand in some circuits. The OP maybe forgot momentarily to think about capacitors and time-varying inputs when they claimed analog circuits don’t have a time component.
For context I'm talking about function application a la lambda calculus. In this case 'function application' is something that's done to discrete symbols. Circuits aren't a discrete system however so there's no mapping.
Functions are a good way to describe things but it's not what they are fundamentally. OP also made a good point about how the inputs can change with the outputs but it's not recursive. It's literally the same process as water flowing more slowly into a lake as the lake gets more full. My intuition with circuits is that you'll end up doing something like that if you try to feed the output to the input without an intermediary store like a delay line or capacitor. And even then it's not recursive unless the algorithm you're running is tail optimized.
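That lake analogy can be sketched numerically (Python; a crude Euler model of an RC low-pass, component values illustrative): the output feeds back to shrink the inflow until equilibrium, with no stack anywhere.

```python
def settle(v_in, r=1e3, c=1e-6, dt=1e-6, steps=20000):
    """Euler-step an RC low-pass toward equilibrium. The rising output
    reduces the current through R -- like a lake filling more slowly
    as it gets full -- until input and output agree."""
    v_out = 0.0
    for _ in range(steps):
        i = (v_in - v_out) / r      # inflow shrinks as the output rises
        v_out += i * dt / c
    return v_out
```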
> Circuits aren’t a discrete system however so there’s no mapping.
You need to qualify that as analog circuits. And they do accept a discrete mapping, you’re just asserting without trying it.
> Functions are a good way to describe things but it’s not what they are fundamentally.
Yes it is. It is both how and why we build circuits; to apply the function’s effect, to pass it to the next function to build a larger function.
Why are you suddenly ruling out capacitors and delay lines? Those are analog circuitry, and they are absolutely standard building blocks.
> it’s not recursive unless the algorithm you’re running is tail optimized.
What’s wrong with tail optimized? That is still recursive, and it’s still feedback. You’re moving the goal posts. Above you and OP stated feedback (tail-recursion) was not a thing in analog circuits. That’s not true.
Oh sorry I was referring to the goal posts you defined with “There’s no function application in circuitry”, and repeating the OP’s claim that what looks like feedback actually isn’t feedback because circuits are in a static electric balance (which is not generally true for analog circuits).
Okay, I accept that the context was analog circuits, that’s fair, and I apologize for accidentally ticking you off with that note. I didn’t mean to, and I felt like a broad generalization could use the qualification, and all it needs is a single word. But I don’t see how that side-comment amounts to bad faith about anything, nor do I agree with that assessment. I’m arguing directly and with evidence against the claims you made, not with hyperbole, not with ad-hominem, but with facts: analog circuits with feedback are not just something that exists, they are widespread and common. Power conditioners, frequency filters, analog guitar delay pedals, all of these things are real-world counter-examples to the way OP framed it, and the way you defended that claim.
I'll be honest I wasn't meaning to defend OP's claim to the death or anything. I was mostly tracking their argument. I too am somewhat confused as to why just feeding outputs to inputs isn't recursive. I think honestly this is most people's intuition and the burden of proof (and therefore the fun) lies on stating the opposite i.e. it's not recursive.
I say this is bad faith arguing because you're very clearly trying to get me to say that circuits can be recursive. If you were arguing in good faith you'd be OK with "moving goalposts", as long as new ones were agreeably defined.
In this case the goalposts seem to be: analog circuits, with no delay wires, capacitors what have you. Basically circuits with no state.
In this case the reasoning basically boils down to: you have to have at least a stack to do recursion. Stack is state.
Now obviously the world of circuits is much bigger than this and in fact includes all stacks by the Computers Are Circuits theorem you provided. But using that is too easy, there's no fun. Good faith arguing is for fun, bad faith arguing is for points
> I say this is bad faith arguing because you’re very clearly trying to get me to say that circuits can be recursive.
What are you talking about? Yes I am, in fact, arguing that analog circuits can be recursive, contrary to the claim OP made and you appeared to defend for multiple comments in a row. It’s not called bad faith when I question and contradict your claim, it’s called a discussion, argument, or debate.
If you didn’t mean to defend it to the death, that begs the question of why you’re choosing to escalate this conversation into ad-hominem territory. If you’re confused as to why analog circuit feedback is recursive, that begs the question why you said “OP also made a good point about how the inputs can change with the outputs but it's not recursive.”
> If you were arguing in good faith you’d be OK with “moving goalposts”
FWIW, that is not true, and sneaking in changes to your claim is not a particularly good faith move. Nor is accusing someone of bad faith when they point it out. It’s rather amusing that accusations of bad faith are rarely if ever made in good faith; they can be recursive too, just like analog circuits!
You have made several broad generalizations about analog circuits and feedback that are not true, essentially claiming analog circuit feedback and recursion can’t exist, and then after that started qualifying it as if you weren’t talking about capacitors and tail recursion, which contradicts the earlier claim that circuits can’t be recursive. I don’t agree to your new goalposts, and I said so, they are not “agreeably defined”.
> In this case the goalposts seem to be: analog circuits, with no delay wire, capacitors what have you. Basically circuits with no state.
I don’t agree these are the goalposts, you’re arbitrarily excluding common analog components, and this is not what OP was claiming. If you exclude capacitors or any state, you’re manufacturing a meaningless point about a not very useful subset of analog circuits, and saying nothing about analog circuits in general. You never answered the question of why you’re choosing to arbitrarily exclude widely used and completely standard analog circuitry, nor why you didn’t state this before I started mentioning capacitors.
> In this case the reasoning basically boils down to: you have to have at least a stack to do recursion. Stack is state.
Not sure what you’re talking about here. Stack is a digital circuit concept, while analog circuits can and do have state. Like I said, analog circuits with feedback and recursion don’t just exist - they are widespread and common.
> There's a measure for that but I can't remember what it's called
Slew rate for opamps I believe
Also worth noting that the "recursion" without negative feedback will hit a limit when the op-amp is saturated. Similar to a "stack overflow" I guess. Things mostly won't break but you'll get a flat output at the positive or negative rails of the op-amp - which causes some of the pain of analog signal processing.
> Even funnier that the feedback actually increases the bandwidth of the system, compared to the open-loop amplifier.
The reason for this is how bandwidth is defined, which is basically the frequency at which the amplifier's gain starts dropping as you go higher.
Adding feedback effectively clips the low-frequency/high-gain area, with the net effect that the gain-versus-frequency plot is a lot flatter (but now, instead of being able to get, say, 10,000x gain at 1 Hz, you're clipping it to 100x, since you're unable to get 10,000x at 1 MHz).
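In other words, gain times bandwidth stays roughly constant for a dominant-pole op-amp. A toy calculation (Python; the 1 MHz figure is illustrative, not from the thread):

```python
def closed_loop_bandwidth(gbw_hz, closed_loop_gain):
    """For a dominant-pole op-amp, gain x bandwidth is roughly constant
    (the gain-bandwidth product), so trading gain for bandwidth is a
    straight division."""
    return gbw_hz / closed_loop_gain

# An amplifier with a 1 MHz gain-bandwidth product: 100x gain is flat
# out to 10 kHz, while 10,000x gain only survives to 100 Hz.
```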
I don't think that's required for recursion. You only need access to as many previous states as you need for the result, but not all. We often talk about tail recursion optimisation where the stack never grows.
You can implement binary search recursively, which doesn't care about the stack, just the final iteration. I don't see where stack unwinding is necessary for "call the same function inside itself unless you're done", except in cases where the stack is used to compute a final answer based on the prior iterations. Maybe it's fair to say those would best be implemented iteratively, but it's unnecessary that they be.
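A sketch of that (Python, illustrative): binary search with the recursive call in tail position, so nothing is needed from the caller's frame after the call returns.

```python
def bsearch(xs, target, lo=0, hi=None):
    """Tail-recursive binary search over a sorted list. Each call only
    needs the narrowed bounds, never the caller's state, so no stack
    unwinding does any work. Returns the index, or -1 if absent."""
    if hi is None:
        hi = len(xs)
    if lo >= hi:
        return -1
    mid = (lo + hi) // 2
    if xs[mid] == target:
        return mid
    if xs[mid] < target:
        return bsearch(xs, target, mid + 1, hi)
    return bsearch(xs, target, lo, mid)
```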
Iteration plus state is equivalent to recursion. Simple feedback circuits can probably do primitive recursion, and for more powerful forms of loops/recursion you probably need to add stateful elements.
Anything that can be defined in terms of itself is recursion. This is the official definition and an opamp fits here.
A lot of people are getting lost in the variations of the semantics, like computer recursion vs. analog recursion. I think the first sentence in the Wikipedia article, however, cinches the general concept.
Yes anything with a feedback loop is recursion because a system with feedback is a system defined in terms of itself.
But these things in the end are semantic illusions. We are wasting huge amounts of effort in defining the boundaries of the definitions behind meaningless symbols.
Does the definition of banana encompass a rotten banana? Is something that has been rotted or digested... Is that still a banana? Who cares. Words are just used for communication. If your point was expressed properly without the need to clearly specify exact semantic definitions then I would just leave it at that.
Logic gates are not recursive i.e. you typically don't feed a gate's output back to its input. Gates are asynchronous and stateless; their outputs only change when their inputs change and they don't retain information.
But if you do feed gate outputs back to their inputs (usually indirectly), then it's a recursive circuit and now it can remember state. This is called a flip-flop, or single-bit memory.
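A toy model of that feedback (Python, illustrative; gate updates done sequentially so the loop can settle):

```python
def sr_latch(s, r, q=0, nq=1, iters=4):
    """Two cross-coupled NOR gates: each output feeds the other gate's
    input. Iterating the feedback a few times lets it settle.
    s=1 sets Q, r=1 resets Q, s=r=0 holds the stored bit."""
    for _ in range(iters):
        q = int(not (r or nq))    # Q    = NOR(R, notQ)
        nq = int(not (s or q))    # notQ = NOR(S, Q)
    return q, nq
```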
IIR filters are awesome, but they can have some crazy behavior. If you want a neat mathematical framework to understand them, check out the z-transform, which maps IIR filters to the space of rational functions (a polynomial divided by another polynomial). You can extract the filter's behavior from this function. For example, the complex zeros of the polynomials are crucial and have names ("poles" for zeros of the denominator, "zeros" for zeros of the numerator). The special case where the denominator reduces to a constant corresponds to FIR filters. IIR filters have decades of research behind them and are used in all radio equipment.
IIR filters cannot compute in the Turing sense, though. They're linear operators, which is particularly evident in the Fourier basis: complex exponentials (i.e. simple rotations) are their eigenfunctions. Much like with neural networks, you also need a non-linear operator to get the "real deal".
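A minimal IIR example for the discussion above (Python, illustrative): a one-pole low-pass whose output is defined in terms of its own previous value, i.e. the feedback/recursion in question.

```python
def iir_lowpass(samples, alpha=0.1):
    """One-pole IIR low-pass: y[n] = (1 - alpha) * y[n-1] + alpha * x[n].
    In z-transform terms the transfer function is
    alpha / (1 - (1 - alpha) * z**-1): a single pole at z = 1 - alpha,
    inside the unit circle, so the filter is stable."""
    y, out = 0.0, []
    for x in samples:
        y = (1 - alpha) * y + alpha * x
        out.append(y)
    return out
```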
Well I made one back in 2003, in simulation at least. It was part of an EU-funded project called POETIC, which in short was making an FPGA that could reprogram itself for evolvable hardware purposes. (Alas the fabrication went off track a bit).
At the suggestion of the designer Yann, I made a growing delay line. The CPU would poke one cell of our FPGA asking for a delay line N units long. This would create a single unit, then that cell would poke an adjacent cell and ask for a delay N-1 units long. Etc. The units could be anywhere on the chip and formed connections to one another with a hardware implementation of Dijkstra's algorithm, iirc. When it was all connected you could start using the delay line.
Not an efficient way to make a delay line, but definitely recursion.
(Cross-post from the SE thread, as I just noticed that one is 7 years old)
In that sense negative feedback is recursion too because it's taking part of the output signal and feeding it back into the input again.
Negative feedback is closer in spirit to a typical recursive function where you often want each iteration to process a "smaller" argument than the last and thus be closer to some fixed point.
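That fixed-point view can be sketched (Python; component values illustrative): relax v_out toward A(v_in - beta*v_out) and it settles at the textbook closed-loop gain A/(1 + A*beta).

```python
def closed_loop_gain(a_open=1e5, beta=0.01, step=0.001, iters=1000):
    """Treat the negative-feedback relation v_out = A*(v_in - beta*v_out)
    as a fixed point and relax toward it with small damped steps (a naive
    stand-in for the amplifier's own dynamics), with v_in = 1. It settles
    at A/(1 + A*beta), roughly 1/beta when the open-loop gain is large."""
    v_out = 0.0
    for _ in range(iters):
        v_out += step * (a_open * (1.0 - beta * v_out) - v_out)
    return v_out
```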
I do vaguely recall that the first time I saw an OP-amp circuit it seemed insane. It's only after you learn about negative feedback, and forcing a virtual ground that you can work with the things and not think they're black magic. That was decades ago, I'd forgotten about it.
Also useful to reduce drive power in hard-switched field-effect-device cascodes: with some mild tricks, stacked cascodes in particular benefit from compensating Miller effect with positive feedback. It still all falls down again when the bottom node/switch is turned back on, and the only arguably difficult part is minimizing the extent of damping needed to prevent the positive feedback from oscillating, like it happily does with an inductive load.
The approach appears to be of relevance for MVDC power supply applications, as well-compensated capacitances in a stacked cascode can minimize driving/switching power in hard-switched SiC JFET based switches (at around 1~2 kV per JFET, stacked as high as needed), with usefully low loss up to the MHz range of e.g. PWM frequencies.
Switched capacitance converters obviate the need for poorly-scaling higher-frequency magnetics, at the mild cost of needing some more switches (only divide-by-2 seems to be truly effective/ideal, while transformers can do arbitrary ratios), and the arguably massive gain of power density reachable in below-substation power classes (pole transformers, up to electric trains, the latter benefitting especially from skipping both the transformer and the AC filtering needs w.r.t. power-to-weight aspects).
At the US Steel works in Gary, Indiana, they have a 6 stand cold rolling mill. This uses pairs of 19,000 Horsepower motors on each stand to pull a cold slab of steel 6 inches thick like taffy to reduce its thickness.
The electronics of the day (early 1960s) couldn't drive things directly, but they did have SCR packs big enough to drive the fields of DC generators in the Ward-Leonard configuration.
> Mostly, it is used in oscillators to produce sine waves.
Well, technically, to get a sine you need a loop gain of exactly 1; anything above that will increase distortion.
Now, to get it to start oscillating in the first place you need a gain above 1, so old-school low-distortion sine generators usually have some kind of automatic gain control.
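A discrete-time sketch of that criterion (Python, illustrative): a two-pole resonator with pole radius r playing the role of loop gain. r = 1 sustains the sine, r > 1 grows until something clips, r < 1 dies out.

```python
import math

def resonator(r, w=0.1, steps=200):
    """Resonator y[n] = 2*r*cos(w)*y[n-1] - r*r*y[n-2], poles at
    r*exp(+/- i*w). Seeded so that y[n] = r**n * sin((n+1)*w) exactly.
    Returns the peak amplitude seen, as a proxy for grow/sustain/decay."""
    y1, y2 = math.sin(w), 0.0
    peak = abs(y1)
    for _ in range(steps):
        y0 = 2 * r * math.cos(w) * y1 - r * r * y2
        y2, y1 = y1, y0
        peak = max(peak, abs(y0))
    return peak
```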
I am very much a layman in math, but I couldn't find any real descriptions of what „state“ actually is. Usually it's just assumed to be a variable holding some value out of a set of possible values, and that's that.
In electronics, state implies recursion (and some stability condition). For example, two inverters that feed each other. So what is a state in this sense? Are these the two possible configurations of the system?
Also, how does state relate to time? FSMs are exactly THE constructs to recognize sequences (the things that define „time“), so how does time relate to all of that?
I feel that there is a deeper connection, but I cannot seem to find anything related to this discussion. Any help?
If there is no sync mechanism then they oscillate indefinitely, with behavior dictated by their physical specification. At any point in time you can measure their outputs, and that would be the current state. Two inverters that feed each other is one of the worst examples to understand state and feedback loops, because it's not stable and messes with your head.
A state is stable if, given the same input it took to reach it, it remains in that state indefinitely. Two inverters cannot be stable as their state constantly oscillates
Another way to view this: try building a stateful recursive circuit out of 2 normally closed relays. Basically it boils down to two separate circuits „folded over each other“
I have done several FPGA algorithms using HLS C++ templates with tail recursion. It works very well. Although it is a high-level abstraction, the underlying logic-gate structure is not easily accessible: HLS generates hard-to-read 'machine' VHDL or Verilog. Though if you really wanted to, it is possible to reverse engineer and get the actual logic. The HLS (synthesizable C++) is only a few lines and easily verifiable, so there is no real need to ever reverse engineer. Also, I'm assuming we are talking about actual recursive algorithms, not something obvious like feedback or IIR filters.
Intermediate electronics designer here, not classically trained, coming from software (also not classically trained). I would say absolutely. If we define recursion as the deterministic variance of function output over time, given a single known input, then for a given circuit-implemented function simply use a timer to take the output of that function and feed it in as input thereafter. Gotchas: this probably requires state buffering, and the control flow implications with respect to parallelism and synchronization are essentially why we have MCUs.
Not exactly a circuit, but a coworker once wrote a recursive macro in VHDL that would generate a unanimous vote detector (an N-input AND gate) from a cascade of 2-input AND gates.
How is this not a circuit? Those AND gates will be built out of multiple transistors that require VCC and GND and current etc. It's absolutely a circuit.
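The shape of that recursive macro, in a Python analogue (illustrative; the original VHDL isn't shown here):

```python
def and_n(inputs):
    """Build an N-input AND from 2-input ANDs by recursion, the same
    shape as the VHDL macro described above: AND the first input with
    the AND of the rest. Inputs are bits (0 or 1)."""
    if len(inputs) == 1:
        return inputs[0]
    return inputs[0] & and_n(inputs[1:])
```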
Discretization (what you are referring to) always involves loss, and rather than instantly getting the "correct" result from your calculations, you will need to wait for enough clock cycles to pass until the remaining error value is less than the (ever-present) noise.
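That converge-until-below-noise loop can be sketched (Python, illustrative; the same successive-approximation idea a SAR ADC uses): each clock cycle halves the remaining error, until it drops under the noise floor.

```python
def successive_approximation(v_in, v_ref=1.0, noise=1e-3):
    """Binary successive approximation of v_in in [0, v_ref): each cycle
    halves the trial step, stopping once the step falls below the noise
    floor. Cycles needed grows like log2(v_ref / noise)."""
    estimate, step, cycles = 0.0, v_ref / 2, 0
    while step > noise:
        if estimate + step <= v_in:
            estimate += step
        step /= 2
        cycles += 1
    return estimate, cycles
```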