Some services run really well behind a reverse proxy on 443, but others can be a real hassle… And sometimes just opening other ports would be easier than trying to configure everything to work through 443.
An example that comes to mind is SSH: yes, you can use SSLH to forward SSH traffic arriving on 443 through to port 22, but it’s so much easier to just leave 22 open…
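For reference, sslh can do that multiplexing in a single invocation; a minimal sketch (the backend addresses are placeholder loopback defaults, and older sslh versions spell --tls as --ssl):

```
# sslh listens on 443 and routes by protocol: SSH clients go to the local
# sshd on 22, TLS clients to the real HTTPS backend on 8443 (placeholder)
sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:8443
```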
Now, for SSH, if you have certificate authentication or a strong password, I think you can feel quite safe, but what about other random ports? What risks am I exposing my server to if I open some of them when a service needs it? Is the effort of trying to pass everything through 443/80 worth it?
Presuming you have not limited edge port 22 to one or two IPs, and that you are not translating a high external port to 22 internally, the danger is that you are allowing the entire internet to hammer away at your SSH daemon. If you have the setup described, you will most definitely see evidence of break-in attempts in your SSH and firewall logs.
Zero-days for SSH do exist, so it’s just a matter of time before you’re compromised if you leave this open.
This is security theater
Flaws in SSH do happen, but they are very rare. The solution to this is defense in depth, which is different from security by obscurity.
My comment was not a recommendation. OP asked what the danger is of leaving SSH open on port 22.
Also, tools are not secure in the abstract. Their implementations are what’s secure (or not).
It just widens your attack surface for the ghost army of bots that roam the net knocking on ports; you don’t want to be someone else’s sap. I would imagine most home attacks fall into three categories: script kiddies just war-driving, targeted attacks on someone specific, or plain old looking for sensitive docs for identity theft or something. It’s still the net, man. If you leave your ass hanging out, someone’s gonna bite it in a new way every time.
Move SSH to a high port. Install fail2ban and set it to ban quickly. The downside is that if you fat-finger your login more than a couple of times it might ban you, so I have a whitelist on mine of the IP addresses I know I will be logging in from. I also run TCP Wrappers, which far too many people screech about being deprecated; it works, and if set up properly it logs all login attempts. I get about three or four attempts a month on my random high port. Of course, most of this depends on you connecting from known addresses or subnets.
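A minimal sketch of that fail2ban setup (the port, times, and whitelisted range below are placeholders, not recommendations):

```
# /etc/fail2ban/jail.local -- hypothetical values, adjust to your own setup
[sshd]
enabled  = true
port     = 52222            # the non-default high port sshd listens on
maxretry = 3                # ban quickly: three failures is enough
findtime = 600              # ...within 10 minutes
bantime  = 3600             # ban for an hour
ignoreip = 127.0.0.1/8 203.0.113.0/24   # whitelist home IPs so fat-fingering doesn't ban you
```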
I only keep the SSH login as a backup. I run WireGuard with the port set to something other than the default, which lets me get into my home network quickly. While it’s always possible there is some bug that would allow someone to access it in the future, it works as well as any other solution.
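For reference, a minimal sketch of a WireGuard server config on a non-default port (the port, addresses, and keys are placeholders):

```
# /etc/wireguard/wg0.conf -- hypothetical example
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51821            # anything other than the default 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey  = <client-public-key>
AllowedIPs = 10.8.0.2/32
```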
Not a sysadmin, just a casual IT person.
If it is open, it is going to get hit by scanners, scrapers, and everything under the sun, even if it is secure. Generally, 443 for your websites via a reverse proxy with an IP whitelist + password is okay. Nothing special, lets you add subdomains, very convenient.
Now, there isn’t anything special about any given port, but you still need to set up some form of access control. If it is an API, have some sort of API key in place. Implement 2FA. Try to isolate the service from the machine. Isolate the machine from bare metal. Keep the bare-metal machine isolated from your home network. Take up farming. Change the default port and add some form of access alerts/logs. Have some sort of fail2ban service in place, because you will be firehosed with scripts and bad traffic.
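As a sketch of the whitelist + password combination mentioned above, on an nginx reverse proxy (the address range and file paths are placeholders):

```
# nginx -- hypothetical IP whitelist + basic auth in front of a proxied service
location / {
    allow 203.0.113.0/24;               # only this placeholder range gets through
    deny  all;
    auth_basic           "restricted";  # and a password on top of that
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:8080;   # the internal service
}
```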
Maybe some of the stuff I recommend is paranoid overkill, but I don’t know enough to cut corners. Security is a hassle; a breach is a nightmare.
IP whitelists are not terribly secure and are quite a hassle.
Instead, use an overlay VPN or some sort of extra security layer like mTLS or Authelia.
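For the mTLS option, a minimal nginx sketch (the hostname and certificate paths are placeholders; clients would carry certificates issued by your own CA):

```
# nginx -- hypothetical mTLS: only clients presenting a cert signed by our CA get in
server {
    listen 443 ssl;
    server_name            example.internal;             # placeholder
    ssl_certificate        /etc/nginx/tls/server.crt;
    ssl_certificate_key    /etc/nginx/tls/server.key;
    ssl_client_certificate /etc/nginx/tls/client-ca.crt; # our private CA
    ssl_verify_client      on;  # reject connections without a valid client cert

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```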
Authelia
Seems interesting…
It is not just a matter of how many ports are open; it is about the attack surface. You can have a single 443 open with the best reverse proxy, but if there is a crappy app behind it that allows remote code execution, you are fucked no matter what.
Each open port exposes one or more services to the internet. You have to decide how much you trust each of those services to be secure, and how much you trust your password.
While we can agree that SSH is a very safe service, if you allow password login for root and the password is “root”, the first scanner that passes by will take control of your server.
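The standard guard against exactly that scenario is two lines of sshd_config (a sketch; make sure key-based login works before applying it):

```
# /etc/ssh/sshd_config -- rules out the root/password scenario above
PermitRootLogin no
PasswordAuthentication no   # keys only; test a key login first or you'll lock yourself out
```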
As others mentioned, putting everything behind a VPN is the best way to reduce the attack surface: VPN software is usually written with security in mind, so you reduce the risk of zero-day attacks. Also, many VPNs use certificates to authenticate the user, making guessed access virtually impossible.
Get a WAF. Sophos Firewall is free if you want to DIY. If not, use Cloudflare.
Opening ports, logging, monitoring, nailing up allow-listed IP addresses, and dicking around with fail2ban is such a timesuck. None of that crap will stop something from exploiting a vulnerability.
Some things are worth farming out to a third party. Plus, you can just point your DNS entry over and be mostly done. No more dynamic-IP BS.
A WAF won’t magically solve your problems and free you from your attack surface. To be effective it needs context about the application and a lot of tuning. Your public-facing services should be treated, configured, and maintained as such. I am not sure if you include a WAF in the stuff that won’t stop exploitation of vulns, but it definitely belongs there: yes, it can decrease volume and make exploitation a bit harder, but that is usually it. Also, don’t just put proprietary third-party stuff in front and hope it solves your problems.
It isn’t a magic solution, no, but you get a lot more control than with crummy layer-3 firewall rules and endless lists.
The big players have far more data about what “bad” looks like. Either we play whack-a-mole with outdated tools and techniques, or we get smart and learn to use what is available.
Self-hosting doesn’t mean we go backward in terms of sophistication and difficulty; it means embracing modern solutions.
In the dinosaur days we had primitive tools, but so did the attackers. We cannot hope to self-host with any measure of security if we bring piss to a shitfight.
I tested WAFs in the past, including ones from the big players, and while they might block some cheesy stuff at the application layer, as long as they are not heavily tailored to your application they stop being effective against most manual attacks.
Anything below the application layer is not a WAF, by the way, so I am not sure whether you mean a WAF or some firewall-ish stuff. Just stick to best practices and expose only what you really need to expose. Putting third parties in front of your stuff also has data-protection implications. If using one makes you feel better, okay, but it should not make you feel more secure while you expose vulnerable stuff.
There’s definitely nothing magic about ports 443 and 80. The risk is always that the underlying service will provide a vulnerability through which attackers can find a way in. Any open port presents an opportunity for attack; the security of the service behind it is what makes it safe or not.
I’d argue that long-tested services like ssh, absent misconfiguration, are at least as safe as most reverse proxies. That doesn’t mean people won’t try to break in via port 22. They sure will; they try on web ports too.

Personally, I don’t forward ports for anything that only I am supposed to access (such as SSH). Instead, I connect to my home network via VPN and establish the connection from the inside. I just have an allow-all rule from the VPN subnet to my main one, but you could also allow things selectively if you don’t want everything accessible via VPN. Using the VPN has the added bonus of ensuring everything goes through a secure tunnel when I’m connecting from a public network.
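That allow-all rule is a one-liner in nftables, for example (a sketch; the table/chain names and both subnets are placeholders for your own VPN and LAN ranges):

```
# nftables -- hypothetical: let the VPN subnet reach the main LAN,
# assuming an existing "inet filter" table with a "forward" chain
nft add rule inet filter forward ip saddr 10.8.0.0/24 ip daddr 192.168.1.0/24 accept
```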
It’s not so much about the ports; it’s about what you’re running that’s accessible to the public.
If you have a single website on 443 and SSH on 22 (or a non-standard port like 6543), you’re generally considered safe. That’s two services, and someone would need to break one of the two to get in.
If you have a VPN on 4567 and everything behind the VPN, then someone would need to hack the VPN to get in.
If you have 100 different things behind 443, then someone just needs to find a hole in one of them to get in.
Generally, ssh, nginx, and a VPN are all safe, and they should be on their own ports.
Sorry to nitpick, but I feel like being precise here is important. Nginx is a project, SSH is a protocol, and a VPN is an overlay network, so more of a concept. All three can be run anywhere on the spectrum between quite secure and super insecure. Also, safe and secure are two different things; I guess you meant secure, so no big deal.
Exposing SSH is not recommended; it’s a hot attack target. Expose a VPN and use that to SSH in.
Or use port-knocking for SSH.
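For reference, the classic knockd sketch (the knock sequence is a placeholder; this assumes the knockd daemon and iptables):

```
# /etc/knockd.conf -- hypothetical: open 22 only after the right knock sequence
[openSSH]
    sequence    = 7000,8000,9000        # placeholder knock sequence
    seq_timeout = 5
    tcpflags    = syn
    command     = /usr/sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```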
While this helps get the volume down, it just adds a layer of obscurity, and the service behind it should still be treated and maintained as if it were fully public-facing.
I think people get too defensive about security by obscurity not being security. It’s still better for things to be obscure; it’s just not sufficient. A hidden lock on a door is marginally better than a visible lock on the door. A hidden button that opens a door isn’t secure, though, of course.
But at the same time, I fully understand why it’s stressed so much. People tend to make analogies in their minds to the physical world, but the digital world is so different. An example I often use: you can’t jiggle every doorknob in the world to see if it’s unlocked, but it’s (relatively) easy to check every IPv4 address for an open port to some database with default credentials.
Security through obscurity is hammered into newbies as bad because it’s often the “quicker and easier” solution, and we don’t want anyone thinking they can just do that and be done with it.
You have to learn the proper way to do it; obscurity only buys you time. Maybe.
While the most bare-bones knocking implementation may be classed as obscurity, there are certainly plenty of implementations which I wouldn’t class as obscurity.
Does this method use a cryptographically secure secret which is transmitted encrypted? If not, it is obscurity. If yes, just use normal secure authentication if your goal is security. If you want to get volume down and maybe reduce your risk, feel free to use such things, but you should not apply the security label to them.
Would you classify out-of-band whitelisting by IP (or other session characteristic[s]) as having no security merit whatsoever? Would you classify it as purely a decision regarding network congestion and optimisation?
You’re of course free to define these things however you wish, but in a form that is helpful to OP’s question, I’m not sure I follow you.
I just wanted to make clear that port knocking is obscurity, and that configuring and maintaining your still-public-facing services in a secure manner is essential. There are best practices, which I did not define, that apply here.
If you whitelist your IP, that of course helps, but I am not sure what it has to do with port knocking. Whitelisting an IP after it knocked right? That would be obscurity. Whitelisting an IP after it authenticated through a secure connection with secure credentials? Why not just use a VPN at that point?
I am also not directly commenting on OP’s question; I am trying to tackle misconceptions in the comments.
Everything you expose is fine until somebody finds a zero day.
Everything these days is built from a ton of publicly maintained packages. All it takes is for one of those packages to fall into the wrong hands and get a malicious update, which happens all the time.
If you’re going to expose the web yourself, use Anubis and fail2ban.
Put everything that doesn’t absolutely need to be public behind a VPN.
Keep all of your software updated. Constant vigilance.
Imagine opening all the windows in your flat. Then leaving them open for a month. What would happen? How many insects would make their new home in your home? How many critters and cats would do the same?
Now, each window is a port. Your flat is your network. Each critter or cat is a bad actor. Each insect is a bot or virus.
To expand on this a bit:
A lot of attacks are automated, since the goal is to compromise as many hosts as possible. These hosts are then used in a botnet or sold on shady websites for use as proxies.
This is an awful analogy…
Firewalls, containers, separate subnets (or VLANs if possible), VPNs.
Keep the really public stuff on a VPS, though, and the private stuff on your home server. Connect them via WireGuard (using e.g. Headscale).

With SSH, it is easiest to do key authentication. Certificate authentication is supported, but it is a little more hassle. Don’t use password authentication, as it is not secure.
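A minimal sketch of that key setup (the user and hostname are placeholders):

```
# generate a modern key pair locally, then install the public key on the server
ssh-keygen -t ed25519
ssh-copy-id user@homeserver.example     # placeholder host
# once key login works, disable passwords in /etc/ssh/sshd_config:
#   PasswordAuthentication no
```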
The key with SSH (OpenSSH specifically) is that it is heavily audited, so it is unlikely to have many issues. The problem is when you start exposing self-hosted services with lots of attack surface. You need to be very careful when exposing services, as web services are very hard to secure and can be the source of a compromise that you may or may not be aware of.
It is much safer to use an overlay VPN or some other authentication frontend like mTLS or an authenticating reverse proxy.
Be sure to keep everything up to date too. Even OpenSSH has had multiple vulnerabilities just this year.
Always good advice.
However, OpenSSH is pretty solid security-wise: https://www.openssh.com/security.html
Note: it is best to check the official security pages instead of random websites.
Better yet: compile from source and keep features to a minimum.
Applies to any package, really.

Vendors packaging OpenSSH open up even more vulnerabilities that the OpenSSH devs can’t protect you from. See the recent xz backdoor that poisoned distro OpenSSH packages.
Opening ports essentially allows other computers on the internet to initiate a connection with yours.
It’s only dangerous if a service running on those ports can be exploited.
“If” is not the correct word choice. It’s only dangerous when a service on the port gets exploited.
Driving a car is only dangerous when you die in a traffic accident.
Your logic doesn’t check out.
If it’s exposed to the internet it’s a matter of when, not if, it is compromised.
To reduce attack surface: if there’s no reason for a port to be open, don’t open it.
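A quick sketch of auditing that on a Linux box (assumes ufw; the port is a placeholder):

```
# list what is actually listening, and which process owns each port
ss -tlnp
# then close anything you don't recognize or need, e.g. a stray service on 8080
sudo ufw deny 8080/tcp
```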
This, coupled with the fact that firewalls are protocol-agnostic. You can, for instance, use ‘port https’ in your Packet Filter config instead of ‘port 443’, but that simply means that PF will block/pass traffic to whatever service is bound to that particular port, and NOT https connections in general.
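A one-line pf.conf sketch of that point (interface and policy details omitted):

```
# pf.conf -- 'https' is just the name /etc/services gives port 443;
# PF passes TCP to whatever daemon is bound there, HTTPS or not
pass in on egress proto tcp to port https
```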
About 5 years ago I opened a port to run a test.
Within hours it was getting hammered (probably by scripts) trying to figure out what that port was forwarded to, and trying to connect.
I closed the port about a week later, but not before that poor consumer router was overwhelmed with the hits. For the next two years I’d still occasionally get hammered with scans.
There are tools out there continually looking for open ports; hits probably get added to a database, and hackers, script kiddies, whoever, will then try to get in.
What’s interesting is that I did the same thing around 2000 with a DSL connection (which was very much a static address) and it wasn’t an issue, even though there were fewer always-on consumer connections.
The only ports I have open are 80 and 443, and 80 just redirects to 443.
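That redirect is a couple of lines in most reverse proxies; an nginx sketch:

```
# nginx -- hypothetical: answer on 80 only to bounce everything to HTTPS
server {
    listen 80;
    return 301 https://$host$request_uri;
}
```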
I also have a BeamMP server that has to have a port open because that’s just how it works, but that VM sits on its own DMZ’d VLAN, and I only open the port when I’m actively playing the game.