Internet Suffers ‘Temporary Amnesia’ After Amazon Web Services Experiences Outage
Amazon says AWS cloud service is back to normal.
A major Amazon Web Services (AWS) outage on Monday disrupted large swaths of the internet, affecting thousands of high-traffic websites, mobile apps, and critical online services.
Impacted sites and apps included Reddit, Snapchat, Venmo, Fortnite, Roblox, Coinbase, Disney+, and numerous banking and government platforms.
Amazon reported “increased error rates and latencies” starting around 3 a.m. ET, tied to its facility in Northern Virginia.
Users reported trouble with popular websites and apps including Duolingo and the online games Roblox and Fortnite. Financial service companies like Coinbase, Robinhood and Venmo also reported disruptions, as did the companies that operate the chatbots Perplexity and ChatGPT. Amazon said its main website was affected. United Airlines, Canva, Reddit and Flickr also acknowledged problems with their websites. The Associated Press, NPR and The New York Times’ Games also said they had issues.
the AWS outage is affecting Ring doorbells. Sky News says “some users can’t see outside their house, definitely scary”
just look outside your window 😅 pic.twitter.com/mylipDe7S4
— Tom Warren (@tomwarren) October 20, 2025
The outage began around 3:00 am Eastern Time, originating in AWS’s US-EAST-1 data center region in Northern Virginia, one of its largest and most critical hubs. Amazon’s official health dashboard cited increased error rates and latencies across multiple AWS services due to a Domain Name System (DNS) resolution issue linked to DynamoDB, a core AWS database system.
DNS resolution issues occur when the process of converting a domain name (like “example.com”) into its corresponding IP address fails or becomes delayed. That failure prevented client systems from locating and retrieving stored data, a breakdown one expert described as the internet suffering “temporary amnesia”.
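To make the failure mode concrete, here is a minimal sketch in Python of the kind of lookup that was failing: it asks the system resolver to turn the regional DynamoDB endpoint name into IP addresses. The endpoint hostname follows AWS’s public naming pattern; the error handling is purely illustrative and not Amazon’s own tooling.

```python
import socket

# Public-style hostname for DynamoDB in the affected region (illustrative).
endpoint = "dynamodb.us-east-1.amazonaws.com"

try:
    # Ask the DNS resolver to convert the name into IP addresses.
    addresses = sorted({info[4][0] for info in socket.getaddrinfo(endpoint, 443)})
    print(f"{endpoint} resolves to {addresses}")
except socket.gaierror as exc:
    # If resolution fails, a client cannot even locate the service,
    # no matter how healthy the service itself may be.
    print(f"DNS resolution failed for {endpoint}: {exc}")
```

When that lookup fails for every client at once, applications built on the database effectively forget where their data lives, which is the “temporary amnesia” described above.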
The company said it “identified the trigger of the event as DNS resolution issues for the regional DynamoDB service endpoints.” Amazon ran into further problems as it worked to resolve the outage, but it was eventually able to restore service. “By 3:01 PM, all AWS services returned to normal operations,” it said.
At about 4:30 PM ET on October 20, things seemed to be returning to normal. Apps like Venmo and Lyft, which had been slow to respond or completely unresponsive earlier, appeared to be working smoothly.
Every AWS outage… pic.twitter.com/XcSKeP8Ar2
— Arvid Kahl (@arvidkahl) October 20, 2025
There were over 6.5 million reports of connectivity issues from around the world.
Downdetector says it received over 6.5 million global reports of connectivity problems, including more than 1.4 million from the US and more than 800,000 from the UK.
“The lesson here is resilience,” says Luke Kehoe, an industry analyst at Ookla. “Many organizations still concentrate critical workloads in a single cloud region. Distributing critical apps and data across multiple regions and availability zones can materially reduce the blast radius of future incidents.”
Those incidents, he says, are probably “becoming slightly more frequent as companies are encouraged to completely rely on cloud services, [but] this kind of outage, where a foundational internet service brings down a large swathe of online services, only happens a handful of times in a year.”
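As a rough illustration of what distributing across regions can look like in code, the sketch below (Python with boto3, using a hypothetical table name and region pair, not a configuration endorsed by Kehoe or Amazon) falls back to a second region when the primary region’s endpoint cannot be reached. It assumes the data is already replicated between the regions, for example via DynamoDB global tables.

```python
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError, EndpointConnectionError

# Hypothetical table name and region list, chosen for illustration only.
TABLE = "orders"
REGIONS = ["us-east-1", "us-west-2"]  # primary first, fallback second


def get_item_with_fallback(key):
    """Try each region in turn and return the first successful read."""
    last_error = None
    for region in REGIONS:
        client = boto3.client(
            "dynamodb",
            region_name=region,
            config=Config(connect_timeout=3, retries={"max_attempts": 2}),
        )
        try:
            return client.get_item(TableName=TABLE, Key=key)
        except (EndpointConnectionError, ClientError) as exc:
            last_error = exc  # note the failure and try the next region
    raise last_error


# Example usage with a hypothetical primary key:
# item = get_item_with_fallback({"order_id": {"S": "12345"}})
```

The point is not the specific code but the shape of it: no single region, and therefore no single DNS name, is allowed to be the only way to reach the data.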
Amazon was also impacted by the outage.
The outage also brought down critical tools inside Amazon. Warehouse and delivery employees, along with drivers for Amazon’s Flex service, reported on Reddit that internal systems were offline at many sites. Some warehouse workers were instructed to stand by in break rooms and loading areas during their shifts, and employees couldn’t load Amazon’s Anytime Pay app, which lets them access a portion of their paycheck immediately.
Seller Central, the hub used by Amazon’s third-party sellers to manage their businesses, was also knocked offline by the outage.
As of now, everything seems to be back to normal for internet connectivity. However, much of the content remains…questionable.
Comments
We’ve taken the internet that was originally designed to route around network problems and put all our eggs in too few baskets. We’ve allowed cloud and content-delivery providers such as AWS and Cloudflare to become too large, and the internet suffers for it every time there’s a glitch in their systems. But here we are and there we go. Too late to do anything about it.
I asked my offshore team what the fallback plan was if the optical fiber to India were deliberately cut by China. Their response was, “We wait until it gets fixed.”
There is no “fallback” plan if any of the fiber cables are damaged or severed, deliberately or by accident. It’s just gone until it’s repaired. If it can be repaired. Though a business that needs the connection could possibly contract out its fiber cable requirements to another fiber cable operator. There seem to be plenty of them at this point in history.
Submarine Cable Map
Exactly. Even using two different ISPs to have a backup doesn’t work because they share the same local backbone. While TCP/IP provides the ability to route around down servers, resiliency assumes there are multiple independent physical paths and this isn’t a given as we localize services.
This alleged outage didn’t affect anything that I use online. I’m sure there’s a lesson there somewhere…
At the end of the day, the cloud is just someone else’s computer.
I just wonder what is unsaid. Worst case, our systems are vulnerable and our enemies are probing.
What steps are we taking to up-armor this infrastructure?
Literally billions spent on zero-trust architecture compliance and monitoring systems like Splunk. The trouble is that it’s not protection but detection, and that technique is always in arrears and lacks the consistency and persistence required to be secure. A recent paper from DefCon demonstrates it’s a house built on sand and can never be made secure.
Using localized cloud storage for multiple applications minimizes the investment needed to attack the system and maximizes the return that a successful attack gleans. The counterargument to this risk is that turning data storage over to industry security experts puts the responsibility on those most qualified to prevent data loss and denial of service. That counterargument never materialized, because of the issues described in the first paragraph. It’s ironic that the service agreements with cloud providers state that the security of your data is your responsibility and doesn’t reside with the experts.
The only way to secure data systems is to not use equipment designed under the Open Systems Interconnection (OSI) model, or to air-gap the systems that do.
It’s just Sky Net. No worries.
And just the other day they fired 40% of their IT team and replaced them with AI.
I’m sure this was totally unrelated.
At some point, maybe stop blaming AWS and start blaming the IT folks who park literally everything in us-east-1.
In fairness, let’s extend some blame to the execs who engage in short-term thinking and refuse to budget for investments in redundancy to maintain operations.
IMO we have these sorts of issues for the same reason we have potholes: paving the streets isn’t ‘sexy’, shutting down lanes of traffic to do it upsets folks while it goes on, and no one really appreciates the importance of it until a crisis hits. Once a crisis arrives, all anyone wants is for it to end as quickly as possible, without any thought to making the investment needed to prevent and/or mitigate future impacts. IOW everyone just wants ‘their crap’ to work NOW and tomorrow can take care of itself… which is exactly the lack of vision that got us here.
I suspect that “right wing” services fared slightly better, as they have already been forced to think about how to maintain their services in the event of discretionary cancellation from progressive service providers such as Amazon.
The lesson of Parler was not lost on the smart ones.
Very good observation.
“Sky News says ‘some users can’t see outside their house, definitely scary’
just look outside your window”
Doesn’t help when your Ring cams are 3/4 mile away, monitoring areas on the other side of buildings and hills, or rental properties across town, or your failing mother halfway across the country.
Doesn’t help when your fancy “Ring Cameras” all upload video to the same AWS bucket, cloud-hosted in a single AWS region (US-East-1) with no fallback, and that region goes belly up and you find you can’t access your videos because your DNS requests keep failing to resolve to the proper database endpoint, and there’s nothing you can do about it.
It’s all about The Cloud™ these days, but nobody thinks about what they’ll do when The Cloud™ drifts away and you can’t find your data.
My youngest, who has been working for Ring ever since it was a startup, tells me that the “employee experience” infrastructure for their troubleshooters was all based in the same unreachable bucket, making the remediation work extra spicy.
The outage took our employee schedule tracker (Sling) offline, but that was a minor inconvenience, not a crisis.
Worse, the outage devastated vast sections of the US government’s IT infrastructure, making large numbers of people unable to access… Wait a second. Schumer did that already.