Every single time I use functional programming, I store the results of my intermediary calls in variables, specifically because naming the variable forces me to explain what that intermediary result should be.
If the intermediary result makes no sense, and only the function composition makes sense, I'll create a new well named function that does the chaining, even if it's single use.
This is literally the only way I've seen multi-year projects stay maintainable; anything else eventually breaks down into unintelligible code.
Writing code for other developers is, in the long run, empathy for yourself, since you'll eventually forget all the context you once had.
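For example, a minimal Haskell sketch (all names here are made up for illustration) of the two options:

import Data.Char (toLower)
import Data.List (sort)

-- When the intermediate result means something, it gets a name:
normalizedLines :: String -> [String]
normalizedLines input =
  let lowered = map toLower input   -- the name says what this stage produces
      asLines = lines lowered
  in  sort asLines

-- When only the composition is meaningful, the composition gets the name,
-- even if it's only used once:
normalizeAndSort :: String -> [String]
normalizeAndSort = sort . lines . map toLower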
Adding intermediate variables like "newline_delimited", "lowercased", "sorted", etc... is just pure noise. It's the equivalent of a newb programmer putting a comment over each line of simple code explaining in English what that code does, despite it being clear already.
"There are only two hard things in Computer Science: cache invalidation and naming things"
One of the great benefits of pipelines and tacit programming is the ability not to name intermediate results, especially when those results speak for themselves, are not re-used, and have no significance in isolation.
> Adding intermediate variables like "newline_delimited", "lowercased", "sorted", etc... is just pure noise.
I don't fully agree. Even when each step is fully descriptive, it greatly helps to see the intermediate result, and names can be used for that as well. It is true that some intermediate names are not required, partly because some steps are best understood in conjunction with neighboring steps (e.g. `sort | uniq -c` is a very common idiom and splitting them would do more harm than good), but a healthy dose of names would help in general.
I would say that there are three major steps in this particular pipeline: normalization (`tr -cs A-Za-z '\n' | tr A-Z a-z`), frequency calculation (`sort | uniq -c`), and extraction of the ${1} largest entries (`sort -rn | sed ${1}q`). So it is reasonable to have two additional names between them. Or you can name each step with a function so that you don't need intermediate results to understand it (`norm-words | calc-freqs | keep-largest ${1}`).
> It's the equivalent of a newb programmer putting a comment over each line of simple code explaining in English what that code does, despite it being clear already.
That is more about repeating functional parts without describing any intention. Comments about intents and clarifications are absolutely fine. For example:
counter += 1; // increment the counter by one (bad)
counter += 1; // increment the global counter, don't need synchronization here (better)
g_counter.incr_without_sync(); // (even better, but not always possible)
While I admit that I sometimes break my own rules when I don't want to spend 15 minutes naming something, I don't think your example convinced me of your case.
Would you honestly expect someone to know by heart what "sort -rn" or "uniq -c" do? This forces anyone reading this code to know what all the arguments mean.
If you tried to push this code, I wouldn't let it pass code review without a comment on each line (except plain "sort", that one's really obvious).
> Would you honestly expect someone to know by heart what "sort -rn" or "uniq -c" do?
For people that program regularly in bash, I would. If it was a rare bash script in a code base where many team members didn't know bash well, comments would be appropriate. Even there, though, that's not the same as introducing superfluous intermediate variables.
The larger point here relates to "intended audience" or "what competencies may I assume the reader has?". This is a matter of art, not science, and highly context dependent.
Take an extreme version of the point you just made:
const x = 4*2;
"Would you honestly expect someone to know by heart that `*` in JS means multiplication?"
Clearly that is absurd, because that knowledge is an assumed competency.
What about `**` for power?
What about `^` for XOR?
What about knowing that `-~x` is equivalent to `x+1`?
Where exactly do you draw the line? You can "err on the side of over-commenting" but only so much, because taken to an extreme it hurts readability, and will annoy everyone.
Responsibilities are distributed. Is clarification the responsibility of the author or the reader? What can I assume "a reasonable reader" should know? The answer depends on many things. But the right answer isn't a blanket policy of assuming incompetence and explaining every detail with comments or intermediate variables.
I think we're in agreement on the framework of the debate (rare), it's just that from personal preference we draw different lines on what information is reasonable to assume your reader has.
I personally have not much regretted assuming incompetence in my future self on small things, but it's your choice what standard you hold yourself to.
Tacit programming shines when you have combinators over some closed, well-defined domain, e.g. parser combinators, runtime assertion combinators, iterator combinators, etc.
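For instance, iterator-style list combinators in Haskell (a toy word-count sketch; the names are mine):

import Data.Char (isAlpha, toLower)
import Data.List (group, sort, sortOn)
import Data.Ord (Down (..))

-- A toy word-frequency pipeline written as a chain of combinators
-- over one closed, well-defined domain: lists.
topWords :: Int -> String -> [(String, Int)]
topWords n =
      take n
    . sortOn (Down . snd)                      -- keep the n most frequent
    . map (\ws -> (head ws, length ws))        -- (word, count)
    . group . sort                             -- frequency calculation
    . words . map normalize                    -- normalization
  where
    normalize c = if isAlpha c then toLower c else ' '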
Well, none of that depends on your functions being point-free or you writing their parameters down.
The Haskell culture in particular has a very strong idea of naming all the things that can have a good name, and none of the ones that can't. Point-free functions are about that last part, but you only touched the first part. (Personally, I think the culture is too radical, but you can't really argue against a principle like this.)
I've been using tacit programming intensively, to the extent that I get rid of most variables. I use both syntactic threading macros and functional combinators to achieve this. It is a double-edged sword in that it can make code as ugly as the original code it is trying to improve upon, but working without variables isn't that difficult, nor does it lead me to intense clusterfucks. Composing lambdas contributes to this a lot more, though.
As for making things clearer for everyone, well, this is code I work on solo (hobby), but I think providing example inputs as well as debugging macros can help a lot. Consider the following:
I actually write this kind of stuff in my code ahahaha (the right move is to implement assoc-> of course hahahaha). Now there are combinators I wrote I never use, like departializers, unappliers, argument shifters, etc
will do when I share with you the streaming string library I talked about on your discord. I use these arrows in every single namespace I write so this is already part of the plan.
Sometimes, the repetition of encounters is just as meaningful as the people we meet.
You are applying functions to arguments, aren't you? So "point-free" means you cannot apply a function to arguments on the left hand side of a definition, but you are allowed to do so on the right hand side? If that's point-free, it is also point-less.
tacit means a definition doesn't name its arguments, not that it isn't applied to arguments. the word tacit means implied, so arguments are implicitly worked upon, instead of explicitly.
Yes, but that is the same as not applying a function to arguments on the left hand side of an equation, but still doing so on the right hand side. The question is, what is the point of not doing that on the left hand side, if you then go and still do it on the right hand side? Let me say it again: It's pointless.
There is nothing to understand here. As I said, it's pointless.
From the Wikipedia article:
> Tacit programming is of theoretical interest, because the strict use of composition results in programs that are well adapted for equational reasoning.
Now this is bullshit. See my explanation, as equations are symmetric, and you can swap left and right hand side.
No, you’re just failing to understand what “equational reasoning” means.
It has little to do with the idea of LeftSide = RightSide. In fact you don’t even need an “equation” in that sense, or any equality sign, to do equational reasoning.
Equational reasoning is when a program is evaluated (i.e. like when you simplify or factorize an equation in maths) by using substitution (or rewriting) rules: https://en.wikipedia.org/wiki/Rewriting
I use point-free languages, and it’s completely okay if you don’t enjoy them. It’s just that I love them, and they’re fun for me, so I don’t like when people are just saying that they’re like objectively bad. Try them out, see if you like them! I personally like a concatenative/stack-based flavor, but you might not! :)
In this example I can’t imagine anyone preferring the second style, but there are cases where it’s nicer. For example compare the tacit:
foo = h . g . f
With the more verbose:
foo x =
  let
    a = f x
    b = g a
    c = h b
  in c
If a, b, and c have useful names that help you understand the code then the second function might be preferable, but in a lot of cases all the intermediate variables are just adding noise and making it harder to see what’s happening at a glance. The tacit example makes it very clear at a quick glance exactly what’s happening.
My personal rule of thumb is that if you are passing combinators in as arguments to other combinators then you should probably stop, but straightforward chaining is usually okay.
tacit programming means you don't use argument names to direct your data to the desired output. what's interesting to me about that is the unexplored possibilities of how data could be directed without names.
I love the concept of point-free programming - write your function by simply concatenating the transformations you want.
I just hate reading the resulting code written by others. What information is expected to come in, and exactly what data passes from one step to the next, and in what position? Data type signatures only go so far.
Point-free means you have all that wiring in your head, without assistance from the notation.
I like reading and writing point-free code, I just really hate debugging it. The debuggers/IDEs are (usually? are there exceptions) not really geared up for it and so the debugging experience is basically one of just having 1 call. This goes for more lambda style calling mechanisms of course. I end up pulling it apart, but that's only because of bad debug/ide support in my case. I can read/write it fine; in most cases it just works and then I like it better than the (verbose) alternatives.
> Point-free means you have all that wiring in your head, without assistance from the notation.
I completely agree. It fatigues me to read unnecessarily point-free programming. I have to translate it into a point-ful style in my head to understand it.
For example, you could take this piece of Haskell code and make it more point-free. I think it's readable at first, if redundant (you could remove the last parameter xs, for example).
-- map: apply the function f to each element of the list xs
map :: (a -> b) -> [a] -> [b]
map f xs = foldr (\x xs -> f x : xs) [] xs
Some people would prefer to write it like this:
map :: (a -> b) -> [a] -> [b]
map f = foldr ((:) . f) []
That lambda takes more effort to parse and think about, though, at least for me. The first version was pretty readable. Now, when I read "((:) . f)" I'm thinking "okay, so the function f takes an argument, and passes the result to the (:) function, which normally takes two parameters; with one argument, it returns another function that takes one list parameter and returns it with the result of "f x" prepended to it." And to do this, I have to know implicitly how many arguments the function (:) takes in order to parse and understand it correctly (though in this case, it's obvious, because (:) is ubiquitous).
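Spelled out, the reduction I end up doing in my head looks roughly like this:

-- ((:) . f) x        =  (:) (f x)         -- apply f first
-- (:) (f x) rest     =  f x : rest        -- (:) still wants the list
-- so:
-- foldr ((:) . f) [] xs  =  foldr (\x rest -> f x : rest) [] xs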
Pointfree.io would take what I wrote and transform it into
map = flip foldr ([]) . ((:) .)
But I'm pretty sure no one would write that. It takes even more effort to correctly parse this.
That said, I don't write any Haskell. I just used to try to, but found I didn't like it.
I'm smiling a bit because I know exactly what you mean and generally agree, with an exception. A common idiom in Elixir is to return {:ok, result} or {:err, :reason} from calls that can fail. Leaning on that idiom, good function names, and good errors goes a long way:
When I define the functions I'll use pattern matching on the first argument to each function so that you can either pass in a "user" or {:ok, user} or {:err, _}. If it matches the error pattern it'll just return the error unmodified. The errors have enough fidelity to make it clear where the pipeline failed. I'm not sure this is a super common pattern though but it worked pretty well for me.
I am not sure about Elixir, but in OCaml, someone using this style would gravitate towards Result.map and Result.bind
(defined so that, given Ok a or an error, map applies the function to the value inside Ok and passes errors through unchanged, while bind can additionally produce a new error; a rough Haskell analogue is sketched below). Two benefits:
1. It de-clutters your functions (you do not need that match statement anymore)
2. It becomes evident
- which functions will simply pass down errors (bind or map) vs. which ones may handle them
- which functions may raise new errors (bind can, map can't)
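A rough Haskell analogue of the same idea, with Either standing in for result (step names are invented for illustration):

-- map (fmap) can't introduce a new error; bind (>>=) can.
data AppError = NotFound | Invalid String deriving Show

fetchUser :: Int -> Either AppError String
fetchUser 1 = Right "alice"
fetchUser _ = Left NotFound

validate :: String -> Either AppError String   -- may raise a new error (bind)
validate u
  | null u    = Left (Invalid "empty name")
  | otherwise = Right u

greet :: String -> String                      -- can't fail (map)
greet name = "hello, " ++ name

pipeline :: Int -> Either AppError String
pipeline uid = fmap greet (fetchUser uid >>= validate)
-- No per-step match/case: errors are passed down automatically.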
Just like there is "decision fatigue" [1], I believe there is "naming fatigue", and naming is one of the three great problems of computing, as everyone knows.
I'd say that at least point-free style prevents naming fatigue, but for the result to be nice to the reader, the naming and factorization are much more important than with explicit code, which comes with many more "safeties", so to speak.
On the specific question you ask, let's hear from the inventor of Forth, one of the major stack-oriented languages, after about 30 years of professional practice. He talks about code comments, which is the local solution that comes immediately to mind when thinking about the issues you pointed out:
So people who draw stack diagrams or pictures of things on the stack should immediately realize that they are doing something wrong. Even the little parameter pictures that are so popular. You know if you are defining a word and then you put in a comment showing what the stack effects are
and it indicates F and x and y
F ( x - y )
I used to appreciate this back in the days when I let my stacks get too complicated, but no more. We don't need this kind of information. It should be obvious from the source code or be documented somewhere else.
Well, comment-less code isn't popular either, but we're not here to design the next Python anyway. By the way, Chuck Moore evolved his language until he determined that the remaining problems were "in the hardware". He built his own CAD tools with his language to design stack-oriented CPUs, with some success, as some of them ended up in spacecraft (notably Philae in 2014 [3]).
In my experience, when you want to keep the data flow as simple as possible (in Forth: keep "stack juggling" to a minimum), there's very often only one good order for parameters. That works very well for the writer, for whom the understanding of the problem helps with memorizing signatures and semantics.
As a reader (of other's code), I have much less experience, but I guess that understanding the same things requires the readers to invest more time up front, in addition to the "accidental obfuscations" (poorly written code, unusual programming habits and conventions...).
It can map a description of a thing to a name. And the descriptions can vary between developers and probably still end up with the same name. So if everyone relies on its naming-prowess then everyone ends up naming things consistently.
Aside: I wonder if it can name parameters from tacit-style...
Seems it can. Look how the output of `uniq` was `counted`.
Maybe we should just let ChatGPT name everything.
Interesting to think about what programming becomes when you no longer have to name things. A lot of programming is grouping code and coming up with names for the groups.
What if we didn't worry about grouping or naming, and just dealt with the data, and let chatgpt provide names on-the-fly.
Functions could even be replaced by descriptions of the actions.
I wonder if you could let an LLM evolve a graphical operating system from just the hardware definition.
We went through ~70 years of trial and error for programming languages going from low-level to progressively higher-level, and kept all the baggage along the way.
How would an AI go about this...with all the knowledge of today? How would it reason about each step in evolving a language?
I think this style requires familiarity with the codebase it's used in before it actually becomes readable, but once you have that familiarity, reading and understanding what's happening is a lot faster. Like Rxjs, for example: if you're unfamiliar with the library it's like hieroglyphs, but once you develop familiarity with it you can read other people's code much faster than if everything was implemented procedurally. That said, if people are paying me money I usually avoid doing point-free, except for Rxjs. I love Rxjs so much.
The wiki page leaves out the source of the 'point-free' nomenclature - category theory (e.g. https://en.wikipedia.org/wiki/Pointless_topology ). The original goal was to talk about sets (or other similar objects) without talking about 'set membership'/'elements', whence 'point-free'; you want to talk only about functions between sets (or morphisms between objects), and build everything up from those.
The name "tacit" comes from the APL family as far as I know. It certainly fits with Iverson's style, as he was fond of seeking out just the right word to describe something regardless of obscurity ("ravel", "copula", etc.). I think the name would have come about after the development of function trains in 1988, and I found a paper "Tacit definition" about Iverson's J from 1991: https://dl.acm.org/doi/10.1145/114054.114077 (digitized at https://www.jsoftware.com/papers/TacitDefn.htm). Not knowing when "point-free" started to be applied to programming, I can't say which is first. I doubt J's developers were aware of "point-free" in any case.
> The wiki page leaves out the source of the 'point-free' nomenclature
The wiki page explains it in the very first sentence: “… in which function definitions do not identify the arguments (or "points") on which they operate”.
Tinkering with APL (Dyalog) gave me one of my most mind-bending programming moments.
dismal ← 10⊥(⌈/10⊥⍣¯1⊢)
This is the complete solution to addition in the framework of Dismal Arithmetic [1].
The pivotal idea there was the inverse of a function, and "trains". Until that moment of insight, I was fiddling about with dfns, which looks janky in comparison.
dismal ← {10(⊤⍣¯1)⍵}∘{⌈/⍵}∘{10(⊥⍣¯1)⍵}⊢
⍣¯1 is APL for "inverse". Aaron Hsu helped me understand [2] what was going on.
I feel like this kind of conceptual power is tacit programming at its finest. It blows my mind that this alien gobbledygook code still makes sense to me after not touching APL for years now. And I've only toyed around with the language.
And if anyone wants an absolute masterclass in tacit programming, have a look at Aaron's Co-dfns compiler. The README has extensive reference material. https://github.com/Co-dfns/Co-dfns/
Interesting, I'd never heard of dismal arithmetic before. Squint hard enough and it starts to look like tropical geometry. That is a different construction where you give the addition and multiplication operators in a polynomial a different meaning. I've been trying to get a better understanding of why this is useful. I know tropical geometry has been used to improve things like price discovery, but I never got to a better understanding as to why. At any rate, I would be curious to know what dismal arithmetic can do for me.
I can't seem to figure out what you mean by inverse. Is the inversion of addition subtraction? Neither post seems to explain what it is (or maybe it assumes knowledge of ⊥ and ⊤?)
>The pivotal idea there was the inverse of a function
I was curious about this piece of unexplained cryptic code, so I did a little investigation to see what's going on. Unfortunately there aren't any revolutionary concepts here, just esoteric notation. I'll explain:
To do that "dismal addition" thing, you need to split a number into digits and build a new number using the largest digit at each position. dismal(123, 321) = 323.
APL gives you an operator to make a number out of a sequence of digits: that inverted T you see in OP's code. The left operand is the base. `inverted_T(10, [1, 2, 3]) = 123`.
It gives you another operator to split a quantity into a hierarchy of units. That's the Tee you see in their second snippet. The left operand is a sequence of radixes. An inch is 2.54 cm, a foot is 12 inches, a yard is 3 feet. To transform 130 cm into ft/yd/in, you'd do: `Tee([3, 12, 2.54], 130) = [1, 1, 3.18]`.
So, OP wanted to use this Tee operator to split a number into digits. The problem is, they don't know beforehand how many digits the number has! If it's 2 digits, they must do `Tee([10, 10], number)`. If it's 3, they must do `Tee([10, 10, 10], number)`. (Because `Tee([10, 10], 123) = [12, 3]`). So in the second snippet they tried to do some juggling to get the number of digits and use it in the Tee function (I guess).
What OP really needs is the inverse function of inverted_T. And wouldn't you know it, APL can give you the inverse of a built-in function or a sufficiently simple user function. How? Maybe an operator? No...
See that operator that looks like a puckered face? That operator applies the function to its left, as many times as the operand to its right, to whatever is to the right of the sideways T. BUT, if the right operand is negative, it applies the inverse of the left operand. Basically, the all-powerful "invert function" operation is hidden as a special case of another operator...
In sum, here's my interpretation of OP's code in pseudocode:
Using:
encode(base, seq) = <base> inverted_T <seq>
max(a, b) = a gamma b
reduce(fn, seq) = <fn> slash <seq>
superapply(fn, times, seq) = <fn> puckered <times> sideways_T <seq>
let dismal =
encode(10, reduce(max, superapply(encode(10), -1)))
So,
dismal([123, 321, 111])
applies the inverse of `encode(10)` one time to each sequence item, giving:
[[1, 2, 3], [3, 2, 1], [1, 1, 1]]
Reduces using max
max(max([1, 2, 3], [3, 2, 1]), [1, 1, 1])
giving
[3, 2, 3]
and encodes it in base 10, giving 323.
So that's it. Nice standard library, awful syntax.
I think that OP's "epiphany" was finding a quirk in this esoteric language to counteract another quirk.
Anyway, having satisfied my curiosity, I'm going to promptly forget everything about this :)
unsigned dismal(unsigned x, unsigned y) {
    /* digit-wise max, built up from the least significant digit */
    unsigned z = 0, place = 1;
    for (; x || y; x /= 10, y /= 10, place *= 10)
        z += place * (x % 10 > y % 10 ? x % 10 : y % 10);
    return z;
}
Seems like a maintenance nightmare tbh. You have to jump to a billion function definitions before you can hope to understand what a program is doing.
I could see myself transforming a functional program into a procedural one as I try to understand it, just so I can have all the information right in front of me and not have to keep things in my head.
Like I get that a functional style helps with the mathematical correctness side of things. But I'm just one of those people who still hold "code is meant for humans" first.
> You have to jump to a billion function definitions before you can hope to understand what a program is doing
On the contrary, if you assume that your functions are good abstractions then you shouldn’t need to know their implementation details in order to compose them. You can tell what it does by looking at just what you have in front of you.
If that’s not the case, then you’re not looking at a good example of this concept. And we all know we can find bad examples of any idea or paradigm— what is useful is the good examples.
> > You have to jump to a billion function definitions before you can hope to understand what a program is doing
> On the contrary, if you assume that your functions are good abstractions then you shouldn’t need to know their implementation details in order to compose them. You can tell what it does by looking at just what you have in front of you.
In maintenance, you often cannot assume that your functions are good abstractions. One of them is doing something wrong, or at least something that needs changed. Which one? You have to go into the implementation details in order to find where you have to begin to work.
That's not "bad examples" of FP. That's just how software maintenance is. And maintenance is going to happen to FP programs too...
Point-free, when done with _andThen_ instead of _compose_, isn't that much different from reading basic statement-oriented programs delimited by semicolons:
(
f andThen
g
)(x)
vs
a = f(x);
g(a);
_andThen_ is of course just compose with the arguments flipped such that you can compose from left to right instead of right to left.
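In Haskell terms, a sketch of _andThen_ (it's the same thing as (>>>) from Control.Arrow, specialized to plain functions):

-- Left-to-right composition.
andThen :: (a -> b) -> (b -> c) -> (a -> c)
andThen f g = g . f

-- (f `andThen` g) x  ==  g (f x)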
In FP languages, and languages that use FP combinators, it is usually more efficient to pass a composed function to combinators that do iteration and copying, like _map(list, f):List_, because the iteration and copying happen on each map application:
map(map(l, f), g)
and
map(l, f andThen g)
produce equivalent results, but the second is faster and has fewer allocations.
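A concrete toy version of that in Haskell (whether the intermediate list is actually materialized depends on the compiler, as noted below):

xs :: [Int]
xs = [1 .. 1000]

twoPasses :: [Int]
twoPasses = map (* 3) (map (+ 1) xs)     -- conceptually builds an intermediate list

onePass :: [Int]
onePass = map ((* 3) . (+ 1)) xs         -- one traversal, same result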
For determining the arguments to f and g and map, typed FP languages usually have IDE features which can show you the inferred type of each expression, or you can jump to the definition and see it, usually by hovering or a key chord while the cursor is over it.
map[A, B](
  fa: List[A],
  f: A => B
): List[B]
and so on.
This allows you to read things easily and avoid cluttering the code with types where they can be easily inferred.
Point-free style is important to make the usage of such combinators acceptable but experienced devs do extract and name a composition when it becomes difficult to understand. Of course, the treatment of functions as effect-free black box transformers for equational reasoning also makes these refactoring extractions to variables safe to do.
You get a feeling for what is too much over time, and settle on when to use point free style and when not to.
On efficiency - good compilers can often identify nested/chained map applications and rewrite them as a single map application during compilation, but map fusion isn't guaranteed by all compilers or all _map_ instances. The evaluation strategy (lazy/eager) of the language and/or data structure also plays a role in whether or not map fusion with point-free style is more efficient or optimisable.
> One of them is doing something wrong, or at least something that needs changed. Which one?
While this is as true in FP as it is anywhere, my experience is that it’s rarely true of the kind of small pure functions that tend to be composed like this. When someone is using a chain of functions like this, they usually are good abstractions that don’t change, and are short enough to be trivially correct.
The first time I read about tacit programming was in one of the blog posts on oilshell [1].
I really like the idea. One of my favorite features of UNIX, the pipe facility, is actually a form of tacit programming. This makes a lot of sense, since the OS itself is already full of ready-made functions that can be exposed through the API [2].
It's also interesting to note that most modern programming languages, for example Python, Ruby, Perl, and JavaScript, are either missing this feature or support it awkwardly.
[1] Pipelines Support Vectorized, Point-Free, and Imperative Style:
The graphical dataflow languages that are popular for music and audio programming (thinking Pure Data, Max/MSP, Reaktor etc) do this in a way that makes the value of "namelessness" a bit more clear. When you connect the output of one unit to the input of another unit, there's no need to name that linkage, but it's still perfectly clear what's flowing over it.
I really enjoy trying to think through programming problems in a dataflow style like this. It often turns the problem inside out in an interesting way. I did about half of this year's Advent of Code problems in such a system and it was a blast.
the languages listed (Pure Data, Max/MSP, Reaktor) are all visual languages where you connect up boxes with wires, similar to LabVIEW. I usually associate the term "dataflow language" with these types of languages, but I'm not actually sure if there are also text-based dataflow languages. Maybe Faust (which is also audio DSP focused).
This technique is highly useful in moderation, provided you are working in a language where the syntax/semantics affords it. It can certainly be overdone to the point where the code becomes harder to decipher for a human reader.
A key step is that programming languages which use the style build a vocabulary of commonly used operations and make these into general knowledge for programmers using the language. Thus, succinctness is obtained and you can compress hundreds of lines into a few. The flip side is a steeper learning curve, which has to be balanced against.
Especially in the query builder example, since it is essentially building up an AST, the type of the top-level object can change in each call, but since it isn’t named it also doesn’t need to be typed, so there’s no issue with variable shadowing etc.
The framing of the builder pattern as primarily a Rust feature, and fluent interfaces as primarily an OO feature, is a bit odd.
A prime example of non-builder fluent interfaces are containers in Rust.
Both builders and fluent interfaces exist pretty much in all languages that can support chaining methods on some sort of object. This includes OO languages as well as totally-not-OO languages like Rust that just stop short of calling their classes "classes."
The builder pattern [0] is orthogonal to fluent interfaces. It has merely become customary for builders to have a fluent interface. The defining feature of the builder pattern is that you don’t set configurable properties on an object itself, but that you use a separate object (the builder) to set the properties on, and then have it build the final object (on which the respective properties are then usually immutable). It’s immaterial whether the builder object has a fluent interface or not. The important thing is that the builder interface is separated from the resulting object’s interface.
Just to add a bit of clarity: GP's second example isn't a builder pattern, because it's missing the actual build step. Builder patterns use method chaining to configure an object, but the final result is an object of a different class, something that doesn't have those initial methods but instead lets you use the result of what was built up. Java's StringBuilder is a classic example, which exists to avoid the O(n^2) of concat'ing multiple values sequentially, after which you use .toString() to return the actual String object built up. GP's first example is one where the builder pattern lets you name constructor arguments, and during the final build it would throw an exception if you have an incompatible combination.
Whether GP's second example counts as a fluent interface is a bit iffy just because of SQL keywords and how it works in general, but possibly a good way to think of fluent interfaces is madlibs: "Find a ___(noun) that is ___(adjective)". Instead of creating a function with complicated arguments like "find(noun, adjective)" or "find(Noun(n, adjective))", you'd mimic the sentence structure with something along the lines of "collection.find(noun).thatIs(adjective)" - the chained methods interact with each other to flexibly specify complicated arguments. The ".thatIs()" could for example be completely omitted or specified multiple times to further restrict what "noun" to return. The query builder is iffy because while it looks like one on the surface, it's actually plain method chaining where each call modifies and returns the original object without the context that's passed to the next call in a fluent interface.
Method chaining is the simple syntactic pattern that enables both of these other patterns. An umbrella term they both fall under.
I don't agree with it. It's like stating the principle of procreation, someone quoting a porn star, and now we name it James Deen. Hope this helps, mrn.
When I write JS (not TS), I prefer as much as possible to use destructuring in every function definition. Named keyword arguments.
Try to change the name of something as little as possible even as it gets passed around. It’s more powerful than a type system in some ways. Not always doable, but just try it sometime; it’s fun.
I'm responding to an unrelated old comment of yours about questioning all assumptions. In Forth, Chuck Moore said local vars are dangerous. Rather, everything should be a global. (However multithreaded Forth implementations do require locals.)
I've never found this style readable except with pipes (bash / elixir), where I love it. With any other syntax, I find it just adds mental overhead. Maybe because you have to read it backwards?
Pipes in both of the languages you specified do function application, not composition, so they’re very much point-ful (you see the arguments you pass/get passed).
Clojure (and I'm sure other lisps and programming languages) has a nice solution to this, the `->` macro ("threading")
You'd do something like:
(save (transform (fetch))) ;; calls fetch, then transform, then save
(-> (fetch)
    (transform)
    (save))
Not that the non-threading version was hard to read, but once the function names start to be a bit longer and involve arguments, the threading version tends to be a lot easier to read.
Tacit programming + llm control library (like LangChain) has a lot of untapped potential. Tacit programming shines when the control structures are simple (and functions/variables can be anonymized), and these patterns occur frequently in programming on top of llms.
Love point-free. It's definitely a muscle you need to keep in shape, and although it's not for everybody, if you design your API with this in mind it can be a lot of fun to program in this style. I think TidalCycles is a nice example of such an API (for the most part).
Wow, seems pretty obvious what this was written for (even if they didn’t know it): building commands/prompts for LLM ensembles. It’s just too perfect: the LLMs could easily implement their own combinators. Would be a huge step towards self-improving systems…
How is omitting the names in any way beneficial? It all ends up as memory addresses in the end. Surely the important point is the sequential data flow.
I often use point-free style myself (pipelines), but I'd never pretend it's good practice. Pipelines are a quick-and-dirty hack for when you're too lazy to use a better programming paradigm. They're "write only", in that it's quicker to write one from scratch than to understand an existing one.
"Tacit programming" as a concept sounds like typical mathematicians' obscurantism, like their love of single-character variable names and implicit operations.