In the most recent episode of our podcast, @cwebber and @emacsen take a walk down memory lane about 8-bit computers, DOS and Unix command lines, and the way that the imagery and mystique of the terminal may be holding us back.

@librelounge great episode! I've often thought it would be great to develop interpreters that allow users to enter commands and get responses in more natural language. The standard #UNIX CLI was designed for a time when every byte of RAM and hard drive space was a precious resource, and I definitely don't want to lose that sense of economy from computing, quite the opposite. But there's no harm in adding training-wheel layers that make using the CLI more like playing a MUD.
@cwebber @emacsen

@strypey @librelounge @cwebber I played around with developing natural language frontends to Unix commands in college.

The challenge is not developing a frontend, but keeping the expressiveness and composability when you do. This may be tied directly into the fact that the interchange format in Unix is just plain text being piped.

If the data structures were more sophisticated, we could elevate the "dialog" to be more rich as well.
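That idea of a richer interchange format could be sketched like this: pipeline stages passing structured records (dicts) instead of flat text, so each downstream stage can address fields by name. A minimal sketch; the record fields and helper names here are hypothetical, not any real shell's API.

```python
# Hypothetical "structured pipes": each stage consumes and produces a
# list of dicts instead of lines of text, so later stages can query
# fields by name rather than re-parsing columns.

def list_files():
    # Stand-in for `ls`: emits records, not text lines.
    return [
        {"name": "report.txt", "size": 2048, "type": "file"},
        {"name": "photos",     "size": 4096, "type": "dir"},
        {"name": "notes.md",   "size": 512,  "type": "file"},
    ]

def where(records, key, value):
    # Stand-in for `grep`, but matching a named field, not raw text.
    return [r for r in records if r[key] == value]

def select(records, *keys):
    # Stand-in for `cut`/`awk`: project out named columns.
    return [{k: r[k] for k in keys} for r in records]

# "list files | keep only regular files | show their names"
result = select(where(list_files(), "type", "file"), "name")
print(result)  # [{'name': 'report.txt'}, {'name': 'notes.md'}]
```

Because every stage agrees on the record shape, a dialog layer could ask "which field do you want to filter on?" instead of making the user remember flag syntax.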

@emacsen @strypey @librelounge @cwebber
I thought it might be nice to add a menu-based helper wrapper around GNU Coreutils. You start it and it prompts you for what you want to do along with offering keyword searches, then walks you through the options. When done, it prints the actual Coreutils command and then invokes it.

But I admit I'm not ambitious enough to write it.
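For what it's worth, the wrapper idea could start very small: map plain-language choices to flags, show the real command so the user learns it, then run it. A rough sketch with a tiny, hypothetical option table for `ls` only.

```python
# Sketch of a menu-driven wrapper around a coreutils command.
# The option table is hypothetical and deliberately tiny.
import subprocess

OPTIONS = {
    "show hidden files": "-a",
    "long listing": "-l",
    "human-readable sizes": "-h",
}

def build_command(choices):
    # Turn a list of plain-language choices into an `ls` invocation.
    flags = [OPTIONS[c] for c in choices if c in OPTIONS]
    return ["ls"] + flags

def run_wizard(choices):
    cmd = build_command(choices)
    # Print the real command first, so the wrapper teaches as it goes.
    print("Equivalent command:", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(" ".join(build_command(["long listing", "human-readable sizes"])))
# ls -l -h
```

A fuller version would prompt interactively and offer keyword search over option descriptions, but the core loop is just this table lookup.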

@emacsen @strypey @librelounge @cwebber I was thinking about that while listening to the episode - such a frontend might also be feasible for voice interaction. If one command knew it was outputting files and the next command knew what files were... maybe voice controlled pipes would work?

@jfred @strypey @librelounge @cwebber The hard part isn't the pipes, it's the data processing bit.

If we look at existing voice systems, they're very closed. It's a calendar subsystem, or a music subsystem, or a wikipedia lookup, etc.

No one AFAIK (prove me wrong) is doing pipes.

I was briefly involved in a replacement for Yahoo Pipes back in the day, but the tech wasn't there yet.

@emacsen maybe I don't understand the problem. Here's a (possibly) related anecdote. When I was learning Te Reo Māori (native language of #Aotearoa/ NZ), I remember struggling to figure out how to chain simple clauses into complex relational sentences (not that I would have described it that way at the time ;) Then I realized that Māori doesn't have words like "with", it just uses "ki" (at, to, or towards) and "i" (from) to 'pipe' one clause into another. (1/2)
@jfred @librelounge @cwebber

@emacsen couldn't a translation system interpret all of English's many relational terms as variations on "to" and "from", treat both as a pipe, and put the command clauses on either side of the pipe depending on whether it's "to" or "from"? For example, a natural language command like 'list system processes and find gimp' could be translated as 'list system processes TO find gimp', which translates to 'ps -e | grep gimp'. (2/2)
@jfred @librelounge @cwebber
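That "ki"/"i" idea translates almost directly into code: split on the connective and pipe the translated clauses. A toy sketch; the clause table is hypothetical and tiny, and "list system processes" maps to `ps -e` (since `ls` lists files, not processes).

```python
# Toy translator: treat the English connective "to" as a pipe and map
# known clauses onto real commands. The clause table is hypothetical.

CLAUSES = {
    "list system processes": "ps -e",
    "find gimp": "grep gimp",
    "count lines": "wc -l",
}

def translate(sentence):
    # Split on the connective "to" and pipe the translated clauses.
    clauses = [c.strip() for c in sentence.lower().split(" to ")]
    return " | ".join(CLAUSES[c] for c in clauses)

print(translate("list system processes to find gimp"))
# ps -e | grep gimp
```

Handling "from" would just mean reversing the clause order before joining; the hard part, as noted below, is that real English isn't this linear.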

@strypey @jfred @librelounge @cwebber Yes, perhaps it could, but for things to be really interesting, the interactivity would look something like:

"Find all the pictures of my dog that I took from my vacation last year".

English isn't so linear.

@emacsen sure, there's a reason folks have turned to machine learning instead of manually coding mappings ;) But I still think a useful translation layer could be built up over time, for example, starting with just allowing more natural language words to be translated into "grep" or "cat", and making the terminal spit out more verbose responses whether commands succeed or fail. In fact, I've seen evidence this is already happening, like command suggestions.
@jfred @librelounge @cwebber

Such interfaces can work if (and only if) they're interactive. There must be unambiguous feedback.

The problem with ML is that there is no way to test it exhaustively. It can produce wildly wrong results for certain inputs, and you won't know until you present it with that input. Feedback can catch those cases.

Visual feedback might be best because it can be nearly instant.
@cwebber @jfred @emacsen

@freakazoid just to clarify, I'm not proposing to use ML, just pointing out that the reason folks have headed that way is that natural language has myriad variations and trying to think of them all and code them is a nightmare. What I'm proposing is to use old fashioned mapping of this to that, like people do when designing natural language text interfaces for MUDs. Starting simple and building it up over time.
@emacsen @jfred @cwebber

@librelounge @cwebber @emacsen great episode, wish it was longer because there is a lot to unpack there

@librelounge @cwebber @emacsen curses somehow feeling as much "inside the machine" as the command line may, I think, partly relate to the terminal abstraction being so spare and simple, and so influencing the development of such tools... similar to the 8-bit machines' graphics. Dev affordances.

You mentioned that there exist diff tools that can operate on S-expressions. Could you provide some links to those? I've always wanted such a diff tool.

@librelounge @cwebber @emacsen my first Libre Lounge episode!

The observation that the shell feels increasingly less powerful given the forms of data we manipulate these days, and the contrast with both Lisp Machines (which gives the user a full programming language) and Python makes me wonder.

- we have a visual interactive REPL interface,
- we have the shell that's built on top of Python

How well do the two integrate?
