I hate how Signal went down because of this… Wish it wasn’t so centralised.
Started moving to Element/Matrix this weekend when I attended a protest and wanted to have some kind of communication, but also wanted to leave my primary phone at home. I was using a de-googled android fork and an e-sim, but being a data-only e-sim, I couldn’t use Signal due to the phone number requirement.
Annoying to have to try to get contacts to install another app, but at least it’s decentralized and comes with the option of being self-hosted once I’m ready to tackle that.
@MrMcGasion @Sunny Come to the dark side (xmpp, and jmp.chat) and get decentralized messaging and SMS support with that data-only sim!
Matrix is way more dark side in my opinion (this is a joke about how often it fluffs up, leaving you in the dark).
@Sunny@slrpnk.net already has an XMPP account, as that is included in every slrpnk.net account automatically. It is very easy to set that up for most Fediverse software, and the user id is identical between Fediverse and XMPP.
Oh damn i did not even know about this! I will defo have a play around with this tomorrow, how very neat!
However, it isn’t me I’m really worried about in the grand picture, it’s family and friends. It was already difficult enough to convert them to using Signal.
Hey, note that you can use mautrix-signal to access your Signal account within Element on this phone.
I have been able to use Signal like any other day. I haven’t seen any disruption in sending or receiving.
For me it was not possible to send or receive messages for a couple of hours.
Signal’s love affair with big tech is deeply disturbing.
My friend messaged me on Signal asking if Instructure (runs on AWS) was down. I got the message. That being said, it’s scary that Signal’s backbone depends on AWS
Why is this scary? That’s what e2ee is for, so that no one besides your recipient can view the contents of a message. It does not matter which server is used. If anything for a service like Signal, you want a server with high availability like AWS, Azure, Google Cloud or Cloudflare.
Scared because it’s centralized. If Amazon decides that it wants to shut Signal down, they can. Nobody can spin up a Signal instance and help out.
I would be surprised if Signal didn’t have a contract with another cloud provider as well, in case of this sort of thing.
I hope they consider other ways of doing things after this incident.
Willing to bet a lot of companies will be considering that now lol. Will it actually happen though? ¯\_(ツ)_/¯
Good
Yeah, was reading about it here too
Ring doorbells, Alexa, ahh… the joys of selfhosting.
Is there no way to check the doorbell video locally?
An Amazon employee misconfigures something and now your doorbell doesn’t work
Obligatory
Oh wow their front page doesn’t mention at all that their products run locally and don’t require subscriptions.
It mentions push notifications and emails, so I guess they must require an account, or can you configure them to use SMTP directly, as with the Amcrest Pro cameras?
TBH, I’ve never used any of those features. I just used it locally and plugged it into home assistant.
But I just reinstalled their app and can confirm I can watch the feed and get push notifications without a cloud account. Haven’t tried email tho
That sounds worth investigating, thanks! Amcrest needs an account for notifications afaik, but the Pro cameras can work just on a local network.
The app for them is awful. Then they made a new version that is awful in slightly different ways, so I’m interested in new options.
The last time I looked into these I missed something. I want to use my existing chime and have it also use that power supply as a power source. Turns out the battery one does that! I missed that months ago! I was going to get a Ring Pro or whatever to get it to do that! This is so much better! Moving up the todo list now!
Local, private, no subscriptions, ONVIF, and no need to actually self-host anything. I haven’t found any other options with that combination.
I would be very surprised if there was
I don’t have one (because of that point), so I don’t know…
Presumably the app and doorbell are hardcoded to go to an AWS URL (so it’s “easier” for consumers), but in theory the data’s all on your wifi.
The one that hits us in self hosted is https://auth.docker.io/
You guys don’t selfhost a registry?
I know this is selfhosted so most people here are hobbyists, but it’s a ton of work to selfhost in an enterprise setting. I’d wager 90%+ of people using image registries are using Docker Hub, GHCR, or AWS ECR.
For your personal use, you don’t need an enterprise setting. It’s just a simple compose file that you run.
You can host a registry in pull through mode, so you still have all the images you use locally, but if it’s not in your registry yet, it pulls it from docker hub or whatever.
The only pain point is that a single registry can’t do both. So if you want to push your own docker images AND have a “cache” of stuff from docker hub, you need to run two registries in two different modes. And then juggle the URLs.
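A minimal sketch of that two-registry setup using the official `registry:2` image (ports, volume names, and paths here are illustrative, not from the thread):

```yaml
services:
  # Writable registry for your own images: push to localhost:5000/...
  registry-local:
    image: registry:2
    restart: always
    ports:
      - 5000:5000
    volumes:
      - local-data:/var/lib/registry

  # Pull-through cache of Docker Hub: pull via localhost:5001
  registry-cache:
    image: registry:2
    restart: always
    ports:
      - 5001:5000
    environment:
      REGISTRY_PROXY_REMOTEURL: "https://registry-1.docker.io"
    volumes:
      - cache-data:/var/lib/registry

volumes:
  local-data:
  cache-data:
```

The cache instance refuses pushes once `proxy.remoteurl` is set, which is exactly the limitation described above.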
Pretty sure you could run Pulp in pull-through mode and add your local Forgejo/whatever registry as a remote, which would at least give you a unified “pull” URL. Then just use Forgejo actions to handle the actual build/publish for your local images whenever you push to main (or tag a release, or whatever).
Pulp might actually be able to handle both on its own, I haven’t ever tried though.
I hadn’t actually considered that before. What’s your preferred way to do that?
Harbor
I have just this (which ironically won’t work now cause docker hub is down)
```yaml
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 5000:5000
    dns:
      - 9.9.9.9
      - 1.1.1.1
    volumes:
      - ../files/auth/registry.password:/auth/registry.password
      - registry-data:/var/lib/registry
    environment:
      REGISTRY_STORAGE_DELETE_ENABLED: true
      REGISTRY_HEALTH_STORAGEDRIVER_ENABLED: false
      REGISTRY_HTTP_SECRET: ${REGISTRY_HTTP_SECRET}
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/registry.password
      # REGISTRY_PROXY_REMOTEURL: "https://registry-1.docker.io/"

volumes:
  registry-data:
```
I don’t even remember how and when I set it up. I think it might be this: https://github.com/distribution/distribution/releases/tag/v2.0.0
Recently somebody has created a frontend, which I bookmarked but didn’t bother to set up: https://github.com/Joxit/docker-registry-ui
Oh god, that just 404s for me
Yeah I ran into this as well. Wondered why it needs a call to auth for public container images in the first place.
How does using Podman help when the registry is down?
mirror.gcr.io is google’s public mirror of dockerhub.
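To actually use a mirror like that as a hedge against Hub outages, the Docker daemon can be pointed at it; a minimal `/etc/docker/daemon.json` sketch (Linux default path, restart the daemon after editing):

```json
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
```

Pulls then try the mirror first and fall back to Docker Hub, so plain `docker pull nginx` keeps working unchanged.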
It makes me wish I was selfhosting more services, music & chat in particular. It wasn’t important enough to set up yet
Can recommend Jellyfin, I use it for both music and tv/movies. Not sure on the chat bit, there are so many option it could get a long list
I have Jellyfin, but I haven’t tried it with music. How does it compare to Navidrome?
For chat, I was thinking something super simple for the weird situations like this. Alternatively, Briar if you’re near the person you want to contact
Finamp as a music-specialized client is really awesome. Just get the beta version as they are reworking it deeply and the stable one is not really updated (also, app passwords make it easier to use the OIDC SSO plugin on Jellyfin).
I moved from subsonic to jellyfin years ago, cuz subsonic didn’t do video very well.
Jellyfin looks to do all the stuff Navidrome does, plus video in the same way
I’d provide a plug for LMS! If you don’t give much of a damn for music video type stuff, it’s pretty solid and exposes more metadata through the Subsonic API than Navidrome does. My use case required Composer tags in addition to the usual smorgasbord. Bonus is that it combines SUPER well with Symfonium and is compatible with Audiomuse AI.
All that said, I would switch over to Jellyfin for music if they upped their music metadata game and made genre exploration a bit easier (assuming you have hundreds of distinct genre tags like I do).
Is this the LMS you’re talking about?
yes, also recommending it
important enough to set up yet
FWIW, for music, LMS is one container command, mounting your music directory with all your files read-only, that’s it. So… if you are used to self-hosting (e.g. already have a reverse proxy and container setup) that’s maybe 1h top.
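For anyone curious, that one-container setup looks roughly like this (image name, port, and paths are assumptions from memory, so check the LMS docs before copying):

```yaml
services:
  lms:
    image: epoupon/lms
    restart: always
    ports:
      - 5082:5082
    volumes:
      - lms-config:/var/lms
      # Read-only mount of the music library, as described above
      - /path/to/your/music:/music:ro

volumes:
  lms-config:
```

Put it behind your existing reverse proxy and point the library scanner at `/music`.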
And I’m having a very good day now :3
Are you an IT contractor or something?
In some way, I am, but mainly I feel my need to only use selfhostable stuff, and selfhost 90% of those services, confirmed.
Who wants to bet Amazon gave AI full access to their prod config and it screwed it up.
That’s a good theory haha
Or some engineer decide today would be a great day to play with BGP
A bad day for Jeff Bezos is a good day for all of us
I’m not an AWS customer. What’s their usual SLA?
that is an understatement 😂
It takes 5-10 reloads to get a page from IMDB lol
OMG, IMDB too
They are an Amazon company, so it makes sense they’d be using AWS.
A fun game to play right now is to try to hit any of your regularly visited sites and see which ones are down. 😂
themoviedb.org unfazed
It’s wild that these cloud providers were seen as a one-way stop to ensure reliability, only to make them a universal single point of failure.
Well, companies use it not for reliability but to outsource responsibility. Even medium-sized companies treated Windows like a subscription for many, many years. People have been emailing files to themselves since the start of email.
For companies, moving everything to msa or aws was just the next step and didn’t change day-to-day operations.
People also tend to forget all the compliance issues that can come around hosting content, and using someone with expertise in that can reduce a very large burden. It’s not something that would hit every industry, but it does hit many.
It is still a logical argument, especially for smaller shops. I mean, you can (as self-hosters know) set up automatic backups, failover systems, and all that, but it takes significant time & resources. Redundant internet connectivity? Redundant power delivery? Spare capacity to handle a 10x demand spike? Those are big expenses for small, even mid-sized business. No one really cares if your dentist’s office is offline for a day, even if they have to cancel appointments because they can’t process payments or records.
Meanwhile, theoretically, reliability is such a core function of cloud providers that they should pay for experts’ experts and platinum standard infrastructure. It makes any problem they do have newsworthy.
I mean, it seems silly for orgs as big and internet-centric as Fortnite, Zoom, or a Fortune 500 bank to outsource their internet, and maybe this will be a lesson for them.
It’s also silly for the orgs to not have geographic redundancy.
No it’s not. It’s very expensive to run and there are a lot of edge cases. It’s much easier to have regional redundancy for a fraction of the cost.
The organizations they were talking about and I was referring to have a global presence
Plus, it’s not significantly more expensive to have a cold standby in a different geographic location in AWS.
yeah, so many things now use AWS in some way. So when AWS has a cold, the internet shivers
universal single point of failure.
If it’s not a region failure, it’s someone pushing untested slop into the devops pipeline and vaping a network config. So very fired.
Apparently it was DNS. It’s always DNS…
A single point of failure you pay them for.
It’s mostly a skill issue for services that go down when us-east-1 has issues in AWS - if you actually know your shit, then you don’t get these kinds of issues.
Case in point: Netflix runs on AWS and experienced no issues during this thing.
And yes, it’s scary that so many high-profile companies are this bad at the thing they spend all day doing
I love the “git gud” response. Sacred cashcows?
Netflix did encounter issues. I couldn’t access it yesterday at noon EST. And I wasn’t alone, judging by Downdetector.ca
What’s the general plan of action when a company’s base region shits the bed?
Keep dormant mirrored resources in other regions?
I presumed the draw of us-east-1 was its lower cost, so if any solutions involve spending slightly more money, I’m not surprised high profile companies put all their eggs in one basket.
I presumed the draw of us-east-1 was its lower cost
At no time is pub-cloud cheaper than priv-cloud.
The draw is versatility, as changes don’t require spinning up hardware. No one knew how much the data costs would kill the budget, but now they do.
Yeah, if you’re a major business and don’t have geographic redundancy for your service, you need to rework your BCDR plan.
But… that costs money.
So does an outage, but I get that the C-suite can only think one quarter at a time
Absolutely this. We are based out of one region, but also have a second region as a quick disaster recovery option, and we have people 24/7 who can manage the DR process. We’re not big enough to have live redundancy, but big enough that an hour of downtime would be a big deal.
Case in point: Netflix runs on AWS and experienced no issues during this thing.
But Netflix did encounter issues. For example the account cancel page did not work.
I would say that’s a pretty minor issue that isn’t related to the functioning of the service itself.
It’s probably by design that the only thing that didn’t work was the cancel page
That’s honestly just a tin-foil hat sort of take, that entirely relies on planning for an unprecedented AWS outage specifically to screw over customers.
What I meant by that is that they probably didn’t care if that service has a robust backup solution like authentication or something would.
They zigged when we all zagged.
Decentralisation has always been the answer.
Sidekicks in '09. Had so many users here affected.
Never again.
But if everyone else is down too, you don’t look so bad 🧠
No one ever got fired for buying IBM.
I wouldn’t be so sure about that. The state government of Queensland, Australia just lifted a 12 year ban on IBM getting government contracts after a colossal fuck up.
Such a monstrous clusterfuck, and you’ll be hard pressed to find anyone having been sacked, let alone facing actual charges over the whole debacle.
If anything, I’d say that’s the single best case for buying IBM - if you’re incompetent and/or corrupt, just go with them and even if shit hits the fan, you’ll be OK.
It’s an old joke from back when IBM was the dominant player in IT infrastructure. The idea was that IBM was such a known quantity that even non-technical executives knew what it was and knew that other companies also used IBM equipment. If you decide to buy from a lesser known vendor and something breaks, you might be blamed for going off the beaten track and fired (regardless of where the fault actually lay), whereas if you bought IBM gear and it broke, it was simply considered the cost of doing business, so buying IBM became a CYA tactic for sysadmins even if it went against their better technical judgement. AWS is the modern IBM.
if you bought IBM gear and it broke, it was simply considered the cost of doing business,
IBM produced Canadian Phoenix Pay system has entered the chat with a record 0 firings.
AWS is the modern IBM.
That’s basically why we use it at work. I hate it, but that’s how things are.
Yes, but now it is “nobody ever got fired for buying Cisco.”
One of our client support people told an angry client to open a Jira with urgent priority and we’d get right on it.
… the client support person knew full well that Jira was down too : D
At least, I think they knew. Either way, not shit we could do about it for that particular region until AWS fixed things.
This gif is audible
This kind of shit will only increase as more of these companies believe they can vibe-code their way out of paying software devs what they are worth.
according to that page the issue stemmed from an underlying system responsible for health checks in load balancing servers.
how the hell do you fuck up a health check config that bad? that’s like messing up smartd.conf and taking your system offline somehow
Well, you see, the mistake you are making is believing a single thing the stupid AWS status board says. It is always fucking lying, sometimes in new and creative ways.
I mean, if your OS was “smart” enough not to send IO to devices that indicate critical failure (e.g. by marking them read-only in the array), and then thought all devices had failed critically, wouldn’t this happen in that kind of system as well…
If your health check is broken, then you might not notice that a service is down and you’ll fail to deploy a replacement. Or the opposite, and you end up constantly replacing it, creating a “flapping” service.
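The usual mitigation for that flapping is hysteresis: only replace an instance after several consecutive failed checks, so one transient blip doesn’t cause churn. A minimal sketch (hypothetical helper, not any real orchestrator’s logic):

```python
def should_replace(check_history, fail_threshold=3):
    """Return True only after `fail_threshold` consecutive failed checks.

    check_history: list of booleans, True = health check passed.
    Requiring several failures in a row before acting avoids "flapping":
    tearing down and redeploying a service on every transient blip.
    """
    recent = check_history[-fail_threshold:]
    return len(recent) == fail_threshold and not any(recent)

# One transient failure: do nothing.
print(should_replace([True, True, False]))          # False
# Three failures in a row: replace the instance.
print(should_replace([True, False, False, False]))  # True
```

Real systems expose the same idea as a setting, e.g. the failure threshold on Kubernetes liveness probes.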