I've always felt like RPN is a really powerful lesson in how subtly redefining the semantics of a problem can make it suddenly much easier to solve.
My personal struggle with functional programming is how difficult I find it to be iterative. I've very frequently backed myself into a corner and ended up having to rewrite everything, whereas in C, even when I miss badly on the architecture, I'm still usually able to reuse some big chunks of what I've already done in the end. Maybe it's just inexperience.
I found that writing in Lisp was liberating, since everything can be reused in a bigger program.
And Lisp is a functional programming language. It is also a programmable programming language, so you can even extend its syntax.
I am interested in your struggle. I want to know what is really difficult about functional programming for a learner.
I think a lot of people encounter this difficulty when they first get into functional programming, but over time it inverts: you end up building all these small, self-contained pure functions that are easy to reuse and recombine, and it's only the exoskeleton of the program that suffers from this stuff. So you spend a little more time on things like effect systems, or whatever else gets the program to a state where you can use logging wherever you want without rewriting all your code. Once you cross the chasm, the impedance mismatch reverses and you're stuck with the bad feeling in all the other code instead.
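To make that concrete, here is a rough sketch in Python (the names are made up for the example, not taken from the article): the pure function is trivial to reuse, but the moment you want a log line inside it, the change ripples out into the surrounding "exoskeleton" unless you have some effect machinery to absorb it.

```python
# A small, self-contained pure function: trivial to reuse and recombine.
def normalize(xs):
    total = sum(xs)
    return [x / total for x in xs]

# Wanting a log line inside it forces a choice. One option is to stay pure
# by returning the log alongside the result (a hand-rolled Writer-style
# pattern), at which point every caller's "exoskeleton" has to thread the
# logs through.
def normalize_logged(xs):
    total = sum(xs)
    log = [f"normalizing {len(xs)} values, total={total}"]
    return [x / total for x in xs], log

result, log = normalize_logged([1, 2, 5])
```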
Made me feel like I understand monads finally...will read again in a couple days when I inevitably forget.
Out of curiosity, does this post help it stick any? https://philipnilsson.github.io/Badness10k/posts/2017-05-07-...
Agreed that RPN (and stack machines) are beautiful and underappreciated. Unfortunately, it's for a relevant reason that I have to push back against the author's newfound love of recursion. Like everyone else, I had the same brain-exploding moment when I realized that recursion was possible and how it forced me to re-think what functions were capable of, but now that I'm old and ornery I'm of the Hot Take that recursion is a clumsy alternative to a loop, not the other way around. Which is to say: if a portion of a program is going to threaten to execute for an unbounded amount of time, I want that fact explicitly called out in the code via a looping construct, rather than having to worry about whether any random function call is going to recur (or even mutually recur), to say nothing of the efficiency of holding onto a single piece of loop-relevant mutable state rather than forcing my runtime to handle an unbounded number of stack frames. If your language both guarantees tail-call elimination and calls it out syntactically, then you get a pass; otherwise I'd be perfectly happy working in languages that didn't support recursion at all.
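To put the trade-off in concrete terms, here is a minimal sketch in Python (which does no tail-call elimination); the functions are invented for illustration:

```python
import sys  # only used to show the default recursion limit

# Loop version: the potentially long-running iteration is explicit, and the
# running total is a single piece of mutable state.
def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

# Recursive version: reads fine, but every element costs a stack frame in a
# language without guaranteed tail calls.
def sum_rec(xs):
    if not xs:
        return 0
    return xs[0] + sum_rec(xs[1:])

print(sum_loop(range(100_000)))    # 4999950000
print(sys.getrecursionlimit())     # typically 1000 in CPython
# sum_rec(list(range(100_000)))    # would raise RecursionError
```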
Recursion isn't about writing functions; it's the foundation of scalable systems.
Deep learning with backpropagation? That's the chain rule of calculus applied recursively over a network, as described by Rumelhart, Hinton, and Williams. We owe LLMs and emerging AI to recursive ideas.
The internet? It's a recursive structure made up of an arbitrary number of interconnected servers. We owe the information era to recursion.
Database queries and programming languages? We prove correctness by reasoning recursively. We architect entire systems of arbitrary size, with correct behavior, simply by specifying the properties of their element types.
Sure, you could probably block recursion within the scope of a single program's compilation. But when you're developing scalable systems, recursion is their natural language. The powerful recursive algorithms for these systems are symptoms of their scalability, not mere implementation choices. When they're forced on you, explicit stacks and other recursion workarounds are obstacles that get in the way of innovation.
Plus, real-world systems aren't single programs compiled together. When microservice A calls microservice B, there's potential for recursion, regardless of whether the programming language of either microservice supports it. Programmers still have to think recursively. No programming language annotation is going to change that.
The Great Pyramid contains a combinatorial explosion of smaller pyramids inside it. They made it out of blocks. Forty centuries look down upon us.
If I have to traverse a tree, then recursion is more natural to me. With a loop you’ll have to manually use a stack (it’s fine, but more error prone). For lists, I rarely write loops or recursion. It’s mostly folds and maps.
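A rough illustration of that difference, in Python, with a hypothetical node-as-dict shape (nothing from the article): the recursive pre-order walk borrows the call stack, while the loop manages an explicit stack, which is where the extra room for error comes in.

```python
# Hypothetical node shape: {"value": ..., "children": [...]}.

# Recursive pre-order traversal: the call stack does the bookkeeping.
def collect_recursive(node):
    values = [node["value"]]
    for child in node["children"]:
        values.extend(collect_recursive(child))
    return values

# Loop with an explicit stack: same order, but you do the bookkeeping.
def collect_with_stack(root):
    values, stack = [], [root]
    while stack:
        node = stack.pop()
        values.append(node["value"])
        stack.extend(reversed(node["children"]))  # reversed to keep pre-order
    return values
```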
For sure. Data structures and call graphs like to converge, so when designing a data model, you are actually designing the (most natural) program flow too.
Linear recursion vs tree recursion.
I don't know, man. Very often, sophisticated problems require recursive thinking (even if you don't end up writing it that way) to be able to see the similarities between subproblems. Just last week I had to resort to this to flatten-index JSON files. I started out confused, then made an inductive leap, and the solution unfolded itself (no pun intended) as a two-liner.
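I can only guess at the actual two-liner, but here is the kind of recursive flattening I read this as (a Python sketch with an invented key scheme):

```python
# Flatten nested dicts/lists into "a.b.0.c"-style index keys for each leaf.
def flatten(value, prefix=""):
    if isinstance(value, dict):
        return {k2: v2 for k, v in value.items()
                for k2, v2 in flatten(v, f"{prefix}{k}.").items()}
    if isinstance(value, list):
        return {k2: v2 for i, v in enumerate(value)
                for k2, v2 in flatten(v, f"{prefix}{i}.").items()}
    return {prefix.rstrip("."): value}

flatten({"a": {"b": [10, {"c": 20}]}})
# {'a.b.0': 10, 'a.b.1.c': 20}
```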
It's way beyond a brain-teaser for me; it's the sharpest tool I know of.
I won't badmouth your preference for one form of iteration over another, but I don't follow what you're trying to say about what functions do.
It's not clear if you mean "the functions that you write" or "the functions that other people (library providers? colleagues? juniors?) write."
The functions you write do what you tell them to do. If you don't tell them to recurse, then they don't. If you do tell them to recurse, then they have whatever frame optimization or boundedness guarantees you bake into them (which are the same responsibilities you take on when writing a loop, just packaged in a slightly different form).
The functions black-boxed by a library aren't something you have a lot of control over, and in most cases neither are those black-boxed by colleagues. Your concerns sound as though they target these cases, yet your preferences really have no traction there.
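As an illustration of the "same responsibilities, different packaging" point above (a made-up sketch, not anyone's real code): the bound you bake into a recursive function is the same guard you would write for a loop, carried as an argument instead of a counter variable.

```python
# The same boundedness guarantee, written both ways.
def countdown_loop(n, limit=100):
    steps = 0
    while n > 0 and steps < limit:   # the bound lives in a counter variable
        n -= 1
        steps += 1
    return n

def countdown_rec(n, limit=100):
    if n <= 0 or limit <= 0:         # the bound is threaded through as an argument
        return n
    return countdown_rec(n - 1, limit - 1)
```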