  • fish links against pcre2, a C library, via the pcre2 crate.

    (it used to also link against ncurses; now it uses the terminfo crate instead, which just reads the terminfo database in rust)

    Of course there is a way to make fish distributable as almost a single file (https://github.com/fish-shell/fish-shell/pull/10367), which rust does make easier (rust-embed is frickin’ cool), but these sorts of shenanigans would also be possible with C++ and aren’t really a big driver of the rust port. It’s more that cargo install would try to install it like that, so why not make that work? (There’s a sketch of what that invocation might look like at the end of this comment.)

    Really, my issue here is that the article makes “making fish available on servers” this huge deal when fish has always been available on servers?
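
    For concreteness, here’s a hypothetical sketch of the kind of cargo install invocation meant above - the package name and the exact flags are assumptions, not something taken from the PR:

    # assumes the repository's main binary package is named `fish`; whether this
    # works end-to-end depends on the self-install work in the linked PR
    cargo install --locked --git https://github.com/fish-shell/fish-shell fish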


  • There is nothing specific in the rust port that makes fish more available for servers or LTS distros.

    Before, you would have had to get a C++11 compiler (which used to be a bit of a PITA until 2020 or so); now you need to get rust 1.70, which isn’t terrible given rustup exists (there’s a short sketch of that at the end of this comment).

    I see they’re taking it from this comment, which says

    Fish should be available on servers, which run old LTS distros - this means we build our own packages for a variety of them.

    Which is something fish has always done - you can go to https://fishshell.com/ and get packages for Ubuntu, Debian, OpenSUSE and CentOS (all server distros), and these packages are built by the fish developers, not the distros.

    That quote comes from the “Setting The Stage” section of the comment, which describes the status quo. It’s about explaining what fish does and what it needs from a new language, not about something fish wants to achieve by switching languages.
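
    On the rustup point above, a minimal sketch - assuming rustup itself is already installed; these are stock rustup commands, nothing fish-specific:

    rustup toolchain install 1.70    # install the pinned toolchain
    rustup default 1.70              # or pass +1.70 to individual cargo invocations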



  • One big, long-standing issue is that fish can’t run builtins, blocks or functions in the background, or concurrently with each other.

    That means a pipeline like

    seq 1 5 | while read -l line
        echo $line; sleep 0.1
    end | while read -l line
        echo $line; sleep 0.1
    end
    

    will have to wait for the first while loop to complete, which takes 0.5s (5 lines × 0.1s of sleep), and only then run the second.

    So it takes 0.5s until you get the first output and a full second until you get all of it.

    Making this concurrent means you get the first line immediately and all of it in 0.5s.

    While this is an egregious example, the lack of concurrency makes all builtin | builtin pipelines slower.

    Other shells solve this via subshells - they fork off a separate process for at least the middle parts of the pipeline. That has downsides in that it’s annoyingly leaky: you can’t set variables or start a background job in those sections and then wait for it outside, because they run in a new process and the outer shell never sees those variables or jobs.
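
    To illustrate that leak, here’s a minimal bash sketch - in bash’s default configuration every part of a pipeline, including a while loop, runs in its own subshell:

    count=0
    seq 1 3 | while read -r line; do
        count=$((count + 1))
    done
    # prints 0: the loop ran in a forked subshell, so the outer
    # shell never sees the updated count
    echo "$count"

    fish runs the loop in the main shell process instead, which avoids this leak - at the cost of the serialization described above.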