Drives me nuts that somewhere along the devops journey people decided that SSHing into a private server used for internal tools is an antiquated and outrageous thing to expect. People for some reason are actually excited about the prospect -- "we're gonna make it so you never have to SSH!". Little do they know that I like SSH. A lot more than I like clicking on the AWS console. And then somehow we're expected to debug the bastard using logz.io or similar -- unconscionable to me but maybe I'm old
Not wanting to use SSH has nothing to do with not liking it. It's about getting your infrastructure to a state where everything you need to manage and troubleshoot it can be done _without_ SSH. This doesn't mean replacing it with click-ops, but having robust deployment and automation in place, an external and centralized log/metrics store, and everything else that minimizes the need to manually manage it at all.
Additionally, removing SSH means removing a large security risk, the need to create and rotate keys, and all the associated mess that we take for granted. It's a huge operational and security burden.
We also have to deal with ridiculously complex permissions structures in cloud dashboards. This is also an operational burden, and certainly a security burden if you get it wrong...
The idea is to reduce the burden and simplify operations. You likely already need cloud dashboards for other things, and you really can't avoid that. You also can't remove SSH without improving automation, but once you do, you'll realize you simply don't need SSH at all.
A sibling comment mentioned cattle vs pets, which is related to this. If servers are not valuable, can be quickly recreated, and the infrastructure is self-healing, that means that you've done all the hard work, and don't need SSH for taking care of your pets.
I think the use of IDEs really made this practice less simple. Terminal-mode Emacs and vi(m) are simple over SSH; VSCode and other tools start getting more complex.
I think the extensions to VSCode and others to run remotely with a browser are starting to bring some of this back into fashion.
Connecting over SSH with VSCode is the most convenient SSH experience I've ever had: VSCode runs locally, connects via SSH, and makes it feel like I'm working locally. I can edit files, open terminals, run software, etc. It's like running vim, a couple of terminals, and an scp file manager, but with the convenience of a desktop application.
Technically you need an extension (Remote - SSH) but I think that's installed by default. It runs some remote code on the server to make it more seamless, but that has never gotten in the way.
Unless something has changed, VSCode runs in a server / client model with the FE on your local machine and the code / language servers running remotely (I don't use it, I stick to neovim).
VSCode has a connect-to-a-remote-server-over-SSH mode built in, and there's an extension to use a Docker container on the server, making it pretty easy to provision dev environments for new developers.
Completely agree that ssh is simpler than clicking on a web console.
A good reason for not relying on ssh is 'cattle vs pets'--using ssh administration can make snowflakes where changes aren't tracked and diagnosable/reproducible. With versioned deployment of system changes you always know how things got to be and can configure many to be the same.
I find it a lot easier to manage a single Linux server than multiple AWS services and a separate third party service for every single thing. Stuff built on top of AWS like Heroku is even worse.
The problem is that a lot of people are now just not comfortable running things in-house. Subscribing to another service and adding an integration feels like the safe option.
I think it's because a lot of people don't actually know how to get full use out of a terminal. When you have to manually enter ssh commands and credentials every single time, browser-based solutions become very attractive.
It's just a bit sad that a proper port-knocking protocol was never established -- one with real features, provably secure in theory and implementation. Then no one would have issues de/activating SSH on a per-need basis.
scp remote.host:path.txt local
scp local remote.host:path
Need a fast proxy to browse the internet securely and bypass restrictions without a VPN? SSH SOCKS proxy! Supported by most operating systems, but I tend to use it directly via Firefox so that it is isolated to one browser. This starts a socks proxy on the desired port:
ssh -D $port $host
Firefox has network settings: simply choose SOCKS, 127.0.0.1 as the host, and the port you specified. Then you're off to the races.
Until we had proper VPN infrastructure into our VPC at AWS, I would use this to view private, VPC-only RabbitMQ dashboards.
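The same proxy can be made persistent in ~/.ssh/config so you don't retype the flags; this is a sketch, with the host alias, hostname, and port all being arbitrary choices:

```
# ~/.ssh/config -- "proxybox" and port 1080 are hypothetical
Host proxybox
    HostName host.example.com
    User me
    DynamicForward 1080   # same effect as ssh -D 1080
```

After which `ssh -N proxybox` brings up the SOCKS proxy without opening a remote shell.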
> Need to get a file from one box to another? Have SSH? `scp` is your friend
For less trivial transfers (recursive structures, large collections that you want to sync in minimal time), rsync supports SSH as a transport medium out of the box, so
scp remote.host:path.txt local
scp local remote.host:path
becomes
rsync remote.host:path.txt local
rsync local remote.host:path
for the basics, then start adding the other options that you need:
rsync some/local/dir me@remote.host:/target/ --recursive --times
rsync some/local/dir me@remote.host:/target/ --archive # archive does recursive and also preserves times, ownership, etc.
rsync some/local/dir me@remote.host:/target/ --recursive --delete-before --dry-run
… … …
In fact, I usually end up using rsync even for the basics, just out of habit typing that instead of scp.
I must admit my rsync experience is limited. I do not know the behaviors and flags very deeply. It's on my list though, as it's surely another one of those foundational Linux tools. I know it is even faster than cp on the same host in some situations.
In my experience, rsync is a tool that's worth it despite not knowing most of what it can do. I use `-a` to get the behavior you want (which is a superset of a bunch of options like preserving filesystem permissions, copying recursively, and other things I haven't bothered to memorize), `--info=progress2` to show progress/network speed updating on a single line (no idea what the other progress options do, but this one does what I want), and if I'm transferring something that's not already compressed, I use `-z --compress-choice=zstd`. After those flags, it's basically a drop-in replacement for scp (`remote:srcpath dstpath` for copying to local and `srcpath remote:dstpath` for copying to remote).
The basics for just copying files as you would with scp are pretty intuitive:
rsync source destination
or
rsync source destination --recursive
with the only extra gotcha being that you need to be careful about trailing /s on directories (sometimes where cp or scp would complain and exit, rsync will do something that isn't quite what you intended).
The first complication you might hit is when you want to sync, including deleting files at the destination as needed, rather than just copying. This isn't _that_ complicated really, but caution is understandable because mistakes can cause unintended deletions. The --dry-run and verbose output options are your friend here, as is the assurance that it won't delete anything unless you explicitly give it one of the --delete* options.
rsync is the tar of network transfers. It is extremely useful, but has too many options, so one ends up memorizing a few use cases and carefully sticking to them.
I still have PTSD from the time I played with the options and made the mistake of swapping source and destination, where the destination was a blank disk.
--dry-run is a useful safety net when trying new combinations of options. That, and the fact it'll never delete unless you specifically give it an option with "delete" in its name.
SCP (what you showed as an example) and SFTP (what you linked to a Wikipedia page for) are not the same thing. SCP is an older protocol and, while faster in some situations, is essentially abandoned. Newer versions of OpenSSH actually use SFTP even if you use the SCP command, so you might as well just use the SFTP command instead.
No, that is not correct. The SCP command was historically for a separate protocol, Secure Copy Protocol (and prior to 2022, it actually used that protocol). SFTP is Secure File Transfer Protocol. They are two separate things, and conflating them because of a compatibility layer is simply wrong.
It's like saying the classic reboot utility and systemd are the same thing just because modern Linux distros symlink the reboot command to systemctl. Or, to stretch a little further, like saying HTTP and HTTPS are the same thing just because most websites redirect one to the other.
There's no justification for spreading confusion and relying on a compatibility behavior for an obsolete command. That compatibility behavior is not present on pre-2022 distros, and may be removed again in the future once people are expected to have updated their scripts & whatnot. You're evangelizing SFTP, just use the SFTP command.
> classic reboot utility and systemd are the same thing just because modern Linux distros symlink the reboot command to systemctl
You are acknowledging my point - in this case they ARE the same thing. If it looks like a duck, quacks like a duck, and walks like a duck - it's a duck. This is like arguing symlinks are wrong and instead of `python` you must use `python3.12` even though they are ultimately identical. Read the man page for `scp` and read the man page for `sftp` and seriously tell me why I should use one over the other.
> That compatibility behavior is not present on pre-2022 distros
Who is using a pre-2022 distro reading this comment? And I have been using scp (with SSH, SFTP) for a lot longer than since 2022. Like over a decade.
Your comment reads like an HOA enforcer. Just raising fuss for no actual benefit.
> You are acknowledging my point - in this case they ARE the same thing. If it looks like a duck, quacks like a duck, and walks like a duck - it's a duck.
Ok, so two programs with different code that perform the same task are identical to you. There are never any merits or drawbacks on implementation according to you.
All the extra features that SFTP has but SCP doesn't don't exist, because "looks like a duck, quacks like a duck." SCP being faster but not allowing resuming interrupted transfers doesn't matter, because "looks like a duck, quacks like a duck."
If this is the level of thought you put into your tech, you don't seem worth arguing with, but you also don't seem like someone who should be "teaching" anything to anyone in your own comments.
> Who is using a pre-2022 distro reading this comment?
Plenty of enterprises. That's only two years ago. People still use systems far older than that.
This does include Ubuntu 22.04 LTS, which shipped OpenSSH 8.9. It would also include RHEL 9, which shipped OpenSSH 8.7, if Red Hat hadn't specifically patched it with the updated behavior.
> And I have been using scp (with SSH, SFTP) for a lot longer than since 2022. Like over a decade.
You probably weren't, you were probably using SCP. Optional SFTP support wasn't even added to the SCP command until 2020, committed in 2021, so "over a decade" is just factually impossible: https://github.com/openssh/openssh-portable/commit/197e29f1c...
This is why you need to know these things, because now you kind of look like an idiot for not even knowing the tools you use.
My company's VPN had some weird quirks with routing and DNS that were preventing me getting to certain web frontends, and being able to do a quick SOCKS proxy to get my Firefox accessing the corp network as if it was doing so from a machine in the office was a lifesaver.
sshuttle[0] is a great tool that turns a regular SSH connection into something that behaves more like a traditional VPN.
e.g. I can run "sshuttle -r myserver example.com", and when I next load example.com in my web browser (without any special configuration) it'll be routed via `myserver`.
I don't see how this gets around a VPN? I still need a host in your example. That's what I'm paying a VPN for, a bunch of hosts around the world that are relatively fast.
Had a family member stationed overseas with the military. Bank of America chose to geo-lock IP access to their customer website. Used SSH box in the USA and informed them on how to configure their laptop to get around the restriction. No need to pay for a VPN or configure one for a single use. Ironically Bank of America had a local branch at the overseas military base.
SSH is the best network service multi-tool. From remote shell, to file transfer, to network routing.
VPN != VPN. You're thinking about "paid proxy for e.g. piracy", they're thinking of "tool to make computers in two different locations believe they're on the same LAN". Both are the same technology under the hood, but the use differs.
In this case you've got a network with some devices on it in location A, and a device in location B and you'd like to make B believe it's on the LAN of location A. Both a traditional VPN such as wireguard or OpenVPN and SSH with SOCKS can accomplish this.
I used a server running in AWS for this while in China. Sometimes you don’t have a VPN, or using certain VPN providers might be heavily monitored and/or blocked.
My roommate was visiting family in China a few years back and I did the same thing. Got him sorted with SSH access to one of our servers and he was able to SOCKS proxy to bypass all of their unhinged filtering.
An alternative I am seeing mentioned with some frequency is Tailscale, which doesn't need port 22 open to the internet, since it's using its own network's connectivity to facilitate your "tailscale SSH" connectivity. From what I read it's very similar to Amazon's SSM Agent.
The usefulness here is that you're closing off ports and reducing your exposure, the downside is that you need proprietary agents installed on the remote devices, and clients (tailscale itself or the SSM extension to AWS CLI) on proprietary networks doing the routing for you. Which might be perfectly fine for your use cases.
I've done even less reading but is Cloudflare's WARP client the same thing for their own network?
It's a proprietary network but not proprietary agents (except the bits specific to proprietary platforms).
One handy feature this enabled is that you can include their open source go library in your program and avoid needing to install anything besides your own binary.
Isn't there even an open implementation of the network/coordination layer (Headscale)?
But regardless, the best term I've heard for these types of networks is overlays. You can add "zero trust" or whatever to make it seem more fashionable, or VPN if you're old school. I remember using Hamachi for this type of use back in maybe the early aughts.
One of the major issues was that a VPN client requires additional configuration and software. Connectivity is the least of the problems they are trying to solve.
Tailscale is just a commercial service that builds upon WireGuard. It automatically generates certificates for each of your devices, ensures they're rotated and up to date, automatically configures routing and DNS between your devices, and offers some additional functionality.
Tailscale has open source clients but a proprietary server to do this, but you can use the open source alternative headscale instead: https://github.com/juanfont/headscale
Convenience. You're free to run WireGuard yourself, but that's a lot of faffing about with config files that some people don't want to do. And then on top of that, they have clients for mobile and e.g. Apple TV. That may be outside your use case, but some find it handy.
You can do all sorts of things with SSH. My favorite is SSHFS, which is a FUSE coupling for SFTP that treats it like a proper filesystem, and it works on anything that speaks SSH. Quicker to set up than a VPN and SMB, and about as secure (you could also theoretically use PAM to authenticate with LDAP or newer MFA protocols).
I use SSHFS daily with my NAS on my home network. It's far faster and simpler to administer than SMB or NFS, with way less config overhead and AAA complications than the other two.
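For a NAS-style setup the mount can even live in /etc/fstab; this is a sketch, with the host, paths, and key file all being placeholders:

```
# /etc/fstab -- user@nas and both paths are hypothetical
user@nas:/export/media  /mnt/nas  fuse.sshfs  noauto,user,_netdev,reconnect,IdentityFile=/home/me/.ssh/id_ed25519  0  0
```

Then `mount /mnt/nas` brings it up on demand, and `reconnect` papers over brief network drops.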
Unfortunately, it is currently semi-abandonware: https://github.com/libfuse/sshfs/blob/eadf7f104a479f0313ecd4... It works well enough for me, but there are certain issues to be aware of. For example, listening for file changes via inotify doesn't work-- and it's a double-whammy of neither SFTP nor FUSE supporting that, so it's unlikely to be fixed any time soon.
SSH port forwarding is one amazing aspect of this software. For one example, you can develop on a remote system by forwarding your local port 3000 to the remote 3000, using something like `ssh -N -L 3000:localhost:3000 user@remote`, all while going through SSH! It's an indispensable tool for modern development.
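Forwards you use often can live in ~/.ssh/config so they come up with every connection; a sketch, where the alias and hostname are made up:

```
# ~/.ssh/config -- "devbox" is a hypothetical alias
Host devbox
    HostName remote.example.com
    User me
    LocalForward 3000 localhost:3000
```

With that in place, plain `ssh devbox` (or `ssh -N devbox` for forwarding only) sets up the tunnel automatically.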
You can also simply use `ssh -X`/`ssh -Y` and use your local X/Xwayland server for the remote app. This way you can start remote GUI apps like local ones from your shell.
I was running emacs on a machine in the cloud on the east coast, tunneling to my X server on my laptop on the west coast. No visible keystroke latency. Only noticeable latencies were copy/paste and initial startup but those were only bad enough to be noticeable, not bad enough to annoy me.
I wish that SSH would be disaggregated further. SSH has become the suite du jour for file transfer, remote access, and a handful of other things. Unfortunately, simultaneously, innovations have been made in transport protocols and elsewhere in the stack that we're unable to take advantage of.
SFTP is a great example of a protocol which has a discrete server (look! There's sftp-server on your computer. Nothing prevents it from running over TLS, or a web socket). I wish that this was the way the entire suite worked. I wish the multiplexing, and underlying shell implementation was transport agnostic (perhaps relying on SOCK_STREAM, or SOCK_SEQPACKET semantics), and the authentication, encryption, etc was its own thing.
I don't think that requiring a VPN to use SSH is good advice for big organizations, at least in terms of usability. My uni's (Leeds) comp sci department had this and it was extremely unpleasant to use. While it is "better" from a technical standpoint, I had peers who paid for private compute time instead of using the uni's free clusters. The reality is that even undergraduate comp sci students often don't know enough IT/sysadmin stuff to be able to figure out setting these things up and there's often a benefit to making the barrier to entry lower.
Something that few people remember is that if you have access to a filesystem through SSH, then you can have a remote Git repository with no configuration!
In the remote machine, you only need to create a bare repository:
git init --bare
And on your "client" machines you use it like any other remote:
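The whole flow can be sketched end to end. Local paths stand in for the server here so it runs anywhere; over SSH the remote URL would just be something like user@host:/srv/git/project.git instead:

```shell
# All paths are throwaway stand-ins for a real server.
mkdir -p /tmp/git-demo
git init --bare /tmp/git-demo/project.git        # the "server" side

git init /tmp/git-demo/client                    # the "client" side
cd /tmp/git-demo/client
git -c user.email=me@example.com -c user.name=me \
    commit --allow-empty -m "first commit"
git remote add origin /tmp/git-demo/project.git
git push origin HEAD                             # push current branch to the bare repo
```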
For quite a while before I built my homelab, my git server was a flash drive plugged into my OpenWRT router.
Honestly I still kind of prefer that to GitLab et al. It's nice to not have to leave my terminal to set up a new repo. It takes so much more effort to log into a website and dismiss a bunch of notifications before I can click even more buttons to create a new repo.
I like having all my repos accessible through the website, but I really just want to create new projects through ssh like a civilized person.
I often find that SSH is blocked on a lot of networks. This is frustrating, since I usually clone git repositories through SSH (and for this reason I'm considering switching to HTTPS); I find it stupid to have to use a VPN just to work with git.
If you really want something that gives you access to remote resources on any network, the only solution is to use a VPN over TLS on port 443, making it impossible to distinguish from any other normal HTTPS traffic. This is why I run an OpenVPN server at my company, whereas normally I use WireGuard, which is more performant (but blocked on a lot of networks).
At the end of the day, port 443 with TLS traffic on it is the only thing guaranteed not to be blocked (on ports 80, 25, etc. firewalls may check that you are actually transferring the expected protocol; they can't on 443, since the traffic is encrypted, though a smart firewall could infer from traffic patterns that the connection is unlikely to be HTTPS; to this day I've yet to see a firewall that smart).
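The relevant server-side directives are just a couple of lines; a sketch, assuming 443 is free for OpenVPN (if a website also needs 443, OpenVPN's port-share directive can hand non-VPN traffic to it):

```
# OpenVPN server config fragment
port 443
proto tcp
```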
Default SSH is with certificates, passwords not used. I like that.
Hard to brute force a certificate.
In the old flat 10.X.X.X network Amazon days, your new hosts were absolutely hammered when being brought up. There must have been folks on the Amazon network itself just port-scanning like crazy.
I see this mentioned a lot. But is it any harder to brute force a 2048-bit certificate than a 2048-bit password? (Note: a 2048-bit password with a base64 character set is 342 characters long.)
Don't get me wrong, there are many advantages to certificates (server doesn't learn the secret, easier to enforce strong secrets, ability to issue centrally, ...) but there is nothing magically different between a certificate and password when it comes to resistance to brute force. If you generate a high-entropy password you are fine from this point of view.
At least in government there often were weird limits on password length (e.g. 8, 40, 72) or in the tooling around passwords. Certificates seem to force allowing things like 2048 bits. Didn't Red Hat have a default of 8 for a while?
Yeah. But if you can afford to store a certificate of that size you can afford to store the password. Also worth noting that for most types of certificate (like RSA where 2048 bit keys are common) there is much less than 2048 bits of entropy. So in practice the numbers probably end up much closer.
What about an IP whitelist managed on some other website (say in AWS). If you need remote access while you are travelling, you login to that website, which will add your current IP to the whitelist. The server refreshes its firewall with the new whitelist every 5 minutes. So within 5 minutes you get access.
That creates another layer of protection (authentication to the website). I would assume a linux firewall is very hard to bypass so almost as good as not exposing the server to the WAN. And doesn't have all the problems and complexity associated with VPNs, works on any device from anywhere.
Apart from the complexities added by having to build this in the first place, this works, but requires a desktop environment, which is not true of ssh.
You could make the argument that this could be an off-the-shelf product you simply install, but then its default port becomes as big a target as port 22 as soon as it becomes commonplace enough, except without 20+ years of open-source security research behind it, and now you're relying on AWS not being down in your region to connect to your servers.
Agreed, but even if you compromise the whitelist, you still need to get past SSH's own security, so I see that as adding another lock rather than substituting one, and mostly as a protection against someone scanning the IP address space when a new unpatched zero day surfaces.
As for the desktop environment, you can add an API to the website.
I didn't know that existed. Seems like a good alternative, thanks. I prefer the whitelist for my own personal usage as it applies to all my servers/VM in one go for all ports. But agree that's a simpler solution if you have a single port+machine to protect.
One neat feature of OpenSSH server is the AuthorizedKeysCommand config, which lets you fetch (or generate!) a user's keys from anywhere, e.g. a curl response.
With this you can easily set up a centralized SSH keys system without the pitfalls of decentralized systems or running a CA. Have the user register their public key on your website in the typical fashion, and then write a simple secure endpoint and use AuthorizedKeysCommand to instantly integrate all your OpenSSH servers.
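A minimal sshd_config sketch of that setup; the script path and endpoint URL are hypothetical:

```
# /etc/ssh/sshd_config
AuthorizedKeysCommand /usr/local/bin/fetch-keys %u
AuthorizedKeysCommandUser nobody

# /usr/local/bin/fetch-keys -- %u above expands to the login name,
# and the endpoint is assumed to return authorized_keys lines:
#   #!/bin/sh
#   exec curl -fsS "https://keys.example.internal/v1/users/$1/keys"
```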
It also lets you implement more exotic authorization schemes with the full capabilities of your internal backend, which is often a million times more enjoyable than fighting through Linux PAM.
If you use SSSD and LDAP and don't like the idea of relying on a curl on every login, you can also centrally manage the keys in LDAP for a similar effect.
My public key is at https://github.com/fragmede.keys, so anyone wanting to let me onto their server can just create me an account, stick that in ~/.ssh/authorized_keys, tell me an IP address and port number, and I'm in.
SSH is great, but rapidly becomes less so once companies decide to put their servers behind sometimes-cascaded SSH gateways, so you need to know which magic invocation of jump hosts to use to get your connection through anyway.
Sure, but the setup really sucks for transitive jump hosts. It would be cool if SSH supported bang paths to access a remote host via various jump hosts, like `foo!bar!baz@quux` to access `baz@quux` by jumping through foo and then bar.
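For what it's worth, OpenSSH's ProxyJump (`-J`) does take a comma-separated chain, so the bang-path spirit is mostly there: `ssh -J foo,bar baz@quux`. It can also be pinned down once in ~/.ssh/config (host names reused from the example above):

```
# ~/.ssh/config
Host quux
    ProxyJump foo,bar
    User baz
```

After which a plain `ssh quux` routes through both jump hosts automatically.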
It has been for 20 years, IIRC. It's designed to be a general-purpose way to do remote access and command exec, with authn/authz. Like, that's its whole ballgame. I think the confusion comes when people assume it was merely a successor to telnet.
But yes, this is why it's so important for the OpenSSH client and server sides to be as bulletproof as they can be. It's a giant worldwide SPOF and therefore a drool-inducing pinata for hackers.
Exposing SSH to the world really bothers me. Personally I firewall to my own IP when connecting to EC2 instances, it's a pain but I don't feel comfortable knowing there may be zero days out there, plus I don't want my CPU cycles wasted by script kiddies.
It seems like there should be a better solution - something like port-knocking but done properly.
Why? OpenSSH is probably the most battle-tested piece of software on the planet. You have to make sure you run it on a similarly hardened operating system, though.
That's nice and all, but as TFA already mentions, ssh has a large attack surface. The most critical one though is that you usually grant people access to a shell, so if an account gets breached, you need to worry about local root exploits, which are actually pretty common.
Also, believe it or not, setting up key-based authentication is quite the challenge for a lot of people, especially if you demand encryption of the private key and setting up an agent. However, you cannot enforce private key encryption server-side, so you can't even guarantee some kind of 2FA is in place. Yes, ssh does nowadays support FIDO, but that's even more complicated for users...
> However, you cannot enforce private key encryption server-side
If you could do this, how would it work?
If you could use the -sk key types or something similar with devices that are built into most computers (like TPM or the Secure Enclave) that would be nice.
> you can't even guarantee some kind of 2FA is in place. Yes, ssh does nowadays support FIDO, but that's even more complicated for users...
Can't you limit the allowable keytypes to those FIDO ones (currently ecdsa-sk and ed25519-sk)? There's also the old ChallengeResponseAuthentication stuff.
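Something along these lines in sshd_config should do it; a sketch, worth checking against your OpenSSH version since the option was renamed from the older PubkeyAcceptedKeyTypes:

```
# /etc/ssh/sshd_config -- restrict logins to FIDO-backed keys
PubkeyAcceptedAlgorithms sk-ssh-ed25519@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com
AuthenticationMethods publickey
```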
OpenSSH has seen one hole recently (and the failed xz attempt), and somehow SSH is less safe than a VPN? How is the VPN configured? What client OS are people using to connect to the VPN? What's the security track record of the various VPN offerings?
The "SSH has a wide attack surface as seen by RCE ..." argument is a bit dishonest IMO. How is any VPN more secure?
If you want, configure SSH to be pubkey-only and hand Yubikeys to your users.
I wonder: in all the recent data leaks exposing billions of users' data, where attackers were inside the company's network (and not just on internet-facing servers), was it through SSH holes that these attacks took place? Or are we talking about a corporate culture of Windows+VPN?