The quiet beginning most people never see
A ransomware attack almost never begins with a dramatic ransom note. It begins with something small and ordinary: a login that looks legitimate, a remote connection that blends into routine traffic, a file opened during a busy morning, or a tool that administrators use every day. That’s what makes ransomware so dangerous. The early steps don’t feel like an emergency; they feel like “normal.” By the time the attack becomes obvious, the attacker has often already done the hardest work: getting in, staying in, and learning where the organization keeps its most valuable systems.
This article breaks down a realistic ransomware scenario as it often unfolds in modern environments. The timing varies by organization, attacker capability, and defenses, but the phases are surprisingly consistent. Think of it like a heist with a clock: reconnaissance, access, preparation, and then the loud moment that forces a decision. Understanding that tempo changes how you defend. It also changes how you respond, because the difference between a limited incident and a full shutdown can come down to a few early choices made under uncertainty.
Quick answers before the timeline
Q: How quickly can a ransomware attack spread?
A: In poorly segmented environments, it can escalate from one foothold to widespread disruption within hours.
Q: What should a team do first when an attack is suspected?
A: Contain and coordinate: isolate affected systems, protect identity, preserve logs, and centralize decisions.
Q: Why do attackers go after backups?
A: Backups reduce leverage, so attackers often try to disable or corrupt recovery options first.
Q: Is the ransom note the first sign of an attack?
A: Not always. Service failures, account lockouts, or unusual admin activity can appear earlier.
Q: When is it safe to bring systems back online?
A: Only after confirming the initial access path is closed and restored systems will be monitored and validated.
Q: Which part of recovery takes the longest?
A: Rebuild and validation: restoring safely, confirming integrity, and preventing reinfection is time-intensive.
Q: How can organizations prepare before an incident?
A: Harden identity, segment networks, improve logging, and test restores and incident roles regularly.
Q: Do ransomware attacks involve data theft as well as encryption?
A: Yes. Many operations stage and exfiltrate data to add pressure beyond encryption.
Q: What is the most common response mistake?
A: Delaying containment while waiting for certainty, allowing the attacker to escalate privileges.
Q: What should happen after recovery?
A: A full review: root cause, control gaps, monitoring upgrades, and rehearsed improvements to response.
Minute 0: The initial foothold
“Minute zero” is the first moment the attacker gains access to something that matters. That could be a compromised employee account, a VPN credential bought on a marketplace, a stolen session cookie, an exposed remote service, or a compromised vendor relationship. Sometimes it’s a single endpoint; sometimes it’s a cloud account; sometimes it’s a foothold in a small remote office that shares identity with the rest of the company.
What defenders see at this stage is often subtle: a login from an unusual location, a new device registration, a successful authentication at an odd hour, or an employee reporting a weird prompt. In many environments, this looks like the daily background noise of IT. That’s why attackers love it. If they can blend in here, the rest of the timeline gets faster.
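As a concrete illustration, here is a minimal sketch of the kind of check that surfaces these early signals: it compares each sign-in against that user’s own recent history. The field names (user, time, country, device_id), the business-hours window, and the sample data are assumptions for illustration; real identity-provider logs will look different.

```python
# Minimal sketch: flag sign-ins that deviate from a user's recent history.
# The event fields (user, time, country, device_id) are hypothetical; map
# them to whatever your identity provider's sign-in logs actually export.
from datetime import datetime
from collections import defaultdict

def flag_suspicious_logins(events, business_hours=(7, 19)):
    """Return events that are outside business hours, from a new country,
    or from a device the user has never used before."""
    seen_countries = defaultdict(set)
    seen_devices = defaultdict(set)
    flagged = []
    for e in sorted(events, key=lambda e: e["time"]):
        hour = datetime.fromisoformat(e["time"]).hour
        reasons = []
        if not (business_hours[0] <= hour < business_hours[1]):
            reasons.append("off-hours sign-in")
        if seen_countries[e["user"]] and e["country"] not in seen_countries[e["user"]]:
            reasons.append("new country for this user")
        if seen_devices[e["user"]] and e["device_id"] not in seen_devices[e["user"]]:
            reasons.append("new device")
        if reasons:
            flagged.append({**e, "reasons": reasons})
        seen_countries[e["user"]].add(e["country"])
        seen_devices[e["user"]].add(e["device_id"])
    return flagged

# Example: a 03:00 sign-in from a country and device this user has never used.
history = [
    {"user": "jdoe", "time": "2024-05-01T09:15:00", "country": "US", "device_id": "laptop-1"},
    {"user": "jdoe", "time": "2024-05-02T03:02:00", "country": "RO", "device_id": "unknown-7"},
]
for hit in flag_suspicious_logins(history):
    print(hit["user"], hit["time"], hit["reasons"])
```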
Minutes 1–10: Establishing persistence without triggering alarms
Once inside, attackers try to make sure they can come back. If their only access is a single compromised password, they may get locked out quickly. So they look for persistence: adding a new account, planting a scheduled task, creating an OAuth app or API token, registering a remote management agent, or setting up a backdoor method that survives a reboot. The specific technique depends on the environment, but the intent is the same: don’t rely on one door.
Defenders who catch ransomware early often catch it here, not at encryption. This is where small indicators can matter: a new admin account created without a ticket, a suspicious scheduled task, a sudden spike in authentication attempts, or unusual access to identity management settings. The problem is that organizations don’t always have clean visibility into their identity layer, especially across cloud services. When identity is the control plane, losing it turns the rest of the environment into a map for the attacker.
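One practical way to catch persistence early is to reconcile identity and persistence changes against approved change tickets. The sketch below shows the idea; the record shapes, the ticket_id field, and the example changes are hypothetical.

```python
# Minimal sketch: reconcile newly created privileged accounts, scheduled
# tasks, and app grants against approved change tickets. Record shapes are
# hypothetical; the point is the reconciliation, not the specific fields.

def unapproved_changes(changes, approved_ticket_ids):
    """Return identity/persistence changes that have no matching ticket."""
    return [c for c in changes if c.get("ticket_id") not in approved_ticket_ids]

recent_changes = [
    {"type": "admin_account_created", "object": "svc-backup2", "ticket_id": "CHG-1042"},
    {"type": "scheduled_task_created", "object": "UpdaterSync", "ticket_id": None},
    {"type": "oauth_app_granted", "object": "MailArchiver", "ticket_id": None},
]
approved = {"CHG-1042"}

for change in unapproved_changes(recent_changes, approved):
    print("Review:", change["type"], change["object"])
```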
Minutes 10–30: Reconnaissance—finding the “keys” and the crown jewels
Now the attacker starts learning. They enumerate users and groups, look for privileged roles, discover which servers are important, and identify how systems connect. They’re not just hunting for data; they’re hunting for leverage. In a modern ransomware operation, leverage is operational disruption. The attacker wants to find what would stop the organization from functioning: file servers, domain controllers, virtualization platforms, backups, identity services, remote management tools, and the systems that run core business processes.
This phase can be noisy if the attacker is reckless, but many are careful. They may spread their activity out, use built-in commands, and run their discovery from machines that look like administrators. That’s a key detail: when attackers gain admin-level access, their behavior begins to resemble legitimate IT work. That similarity is one of the most frustrating aspects of ransomware defense. The same tools that keep systems running can be repurposed to take them down.
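A rough detection idea for this phase, sketched below under assumed log fields and thresholds: compare how many distinct directory objects each host queries against that host’s normal behavior, and flag large jumps.

```python
# Minimal sketch: flag hosts that suddenly enumerate far more directory
# objects than they normally do. Event fields, baselines, and the multiplier
# are assumptions; tune against your own directory or audit logging.

def enumeration_outliers(events, baseline_per_host, multiplier=10):
    """events: iterable of (host, queried_object). Returns hosts whose
    distinct-object count exceeds multiplier x their normal baseline."""
    distinct = {}
    for host, obj in events:
        distinct.setdefault(host, set()).add(obj)
    outliers = []
    for host, objs in distinct.items():
        normal = baseline_per_host.get(host, 5)  # default for a quiet host
        if len(objs) > multiplier * normal:
            outliers.append((host, len(objs), normal))
    return outliers

# Example: a finance workstation that usually touches 3 objects per day
# suddenly enumerates 400 groups.
events = [("ws-finance-07", f"group-{i}") for i in range(400)]
print(enumeration_outliers(events, {"ws-finance-07": 3}))
```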
Minutes 30–60: Privilege escalation—turning access into control
If the attacker does not already have high privileges, this is when they attempt to gain them. They may steal credentials from memory, capture tokens, abuse misconfigurations, exploit unpatched systems, or find a forgotten admin account that never rotated its password. They may also target helpdesk workflows, which can be surprisingly powerful if identity verification is weak.
For defenders, this is a turning point. A ransomware incident with only a few compromised endpoints can be contained with targeted isolation. A ransomware incident that reaches privileged identity can become an enterprise-wide event. If the attacker gains domain admin privileges or equivalent cloud admin roles, the organization’s security tools can become less reliable. Monitoring can be disabled, policies can be altered, and access can be granted to places that were previously protected.
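Because privileged-group changes are such a pivotal signal, even a simple scheduled diff against an approved roster can help. The group names, roster, and snapshot below are assumptions; the technique is just a set difference.

```python
# Minimal sketch: diff the current membership of high-privilege groups
# against an approved roster, run on a schedule. Names are illustrative.

APPROVED = {
    "Domain Admins": {"alice.admin", "breakglass-01"},
    "Global Administrator": {"alice.admin"},
}

def unexpected_privileged_members(current_membership):
    """Return (group, member) pairs that are not on the approved roster."""
    findings = []
    for group, members in current_membership.items():
        for member in members - APPROVED.get(group, set()):
            findings.append((group, member))
    return findings

snapshot = {
    "Domain Admins": {"alice.admin", "breakglass-01", "svc-print"},  # svc-print is new
    "Global Administrator": {"alice.admin"},
}
for group, member in unexpected_privileged_members(snapshot):
    print(f"ALERT: {member} added to {group} without approval")
```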
Minutes 60–120: Lateral movement—moving like an internal admin
Once privileges rise, attackers expand across the environment. They use remote administration to jump from one system to the next, seeking better access and better staging points. They may target virtualization management, file shares, remote desktop services, and management servers that can push software broadly. If the organization uses centralized management, the attacker sees a megaphone.
Defenders may notice at this stage that multiple machines are contacting each other in unusual ways, that remote management tools are being used outside standard patterns, or that an account is authenticating across many hosts rapidly. But again, in many companies, administrators do authenticate across many hosts. What matters is whether the pattern matches normal maintenance windows, normal endpoints, and normal change management. When ransomware response teams say “know your baseline,” this is the moment they mean.
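That baseline idea can be made concrete. The sketch below counts how many distinct hosts an account authenticates to within a short window and compares it with that account’s usual fan-out; the event fields, window, and multiplier are assumptions to tune locally.

```python
# Minimal sketch: detect an account fanning out across many hosts far faster
# than its normal pattern. Fields, window size, and multiplier are assumptions.
from datetime import datetime, timedelta
from collections import defaultdict

def lateral_movement_candidates(auth_events, baselines,
                                window=timedelta(minutes=30), multiplier=5):
    """auth_events: list of {"account", "host", "time" (ISO string)}.
    Returns accounts whose distinct-host fan-out in any window exceeds
    multiplier x their usual fan-out."""
    by_account = defaultdict(list)
    for e in auth_events:
        by_account[e["account"]].append((datetime.fromisoformat(e["time"]), e["host"]))
    flagged = {}
    for account, events in by_account.items():
        events.sort()
        usual = baselines.get(account, 2)
        for i, (start, _) in enumerate(events):
            hosts = {h for t, h in events[i:] if t - start <= window}
            if len(hosts) > multiplier * usual:
                flagged[account] = len(hosts)
                break
    return flagged

# Example: a deployment account that normally touches 1 host hits 25 in 25 minutes.
events = [{"account": "svc-deploy", "host": f"srv-{n:02d}",
           "time": f"2024-05-02T02:{n:02d}:00"} for n in range(25)]
print(lateral_movement_candidates(events, {"svc-deploy": 1}))
```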
Hour 2–4: Targeting backups and recovery paths
Modern ransomware groups don’t simply encrypt and hope. They actively try to sabotage recovery. Backups are the enemy of ransomware leverage, so they are frequently targeted before encryption begins. Attackers may attempt to delete snapshots, disable backup jobs, encrypt backup repositories, or compromise the accounts used to manage backups. They may also attempt to corrupt recovery by altering configurations, disabling security tooling, or planting persistence that will trigger again after restoration.
This is where the incident becomes more than a technical puzzle; it becomes a race for options. If backups are healthy, the organization has choices. If backups are damaged, choices narrow and pressure rises. In a minute-by-minute lens, the question becomes: are defenders moving faster than the attacker toward the systems that decide the outcome?
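A minimal watch on the recovery path might look like the sketch below, which flags snapshot deletions, disabled backup jobs, and repeated backup failures. The event names and thresholds are assumptions; map them to whatever your backup platform actually reports.

```python
# Minimal sketch: surface the recovery-path signals that often precede
# encryption. Event names and the failure threshold are assumptions.
from collections import Counter

HIGH_RISK_EVENTS = {"snapshot_deleted", "backup_job_disabled", "repository_permission_changed"}

def recovery_path_alerts(events, failure_threshold=3):
    """events: list of {"type", "target"}. Returns human-readable alerts."""
    alerts = [f"{e['type']} on {e['target']}" for e in events if e["type"] in HIGH_RISK_EVENTS]
    failures = Counter(e["target"] for e in events if e["type"] == "backup_job_failed")
    alerts += [f"repeated backup failures on {t} ({n}x)"
               for t, n in failures.items() if n >= failure_threshold]
    return alerts

sample = [
    {"type": "snapshot_deleted", "target": "vmcluster-01"},
    {"type": "backup_job_failed", "target": "fileserver-02"},
    {"type": "backup_job_failed", "target": "fileserver-02"},
    {"type": "backup_job_failed", "target": "fileserver-02"},
]
print(recovery_path_alerts(sample))
```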
Hour 4–6: Data staging and “double extortion” pressure
Many ransomware operations now include data theft. The attacker quietly collects valuable files, compresses them, and stages them for exfiltration. The goal is not only to encrypt systems but also to threaten exposure. That changes negotiations, legal considerations, and communications. It also changes how defenders prioritize. Encryption creates an operational crisis. Data theft creates a reputational and compliance crisis that can persist long after systems are restored.
From a defender’s perspective, exfiltration is difficult to confirm quickly. Traffic may look like normal cloud uploads, remote work sync, or routine data transfers. But the attacker often needs to move a lot of data efficiently, and that can create patterns: large transfers from unusual hosts, repeated access to sensitive shares, or compression and archiving behavior on servers that don’t normally do that work.
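One rough heuristic, sketched below with assumed flow-record fields and a placeholder threshold: compare each host’s outbound volume against its own historical average and flag large multiples, especially on servers that normally send very little.

```python
# Minimal sketch: compare each host's outbound bytes in the latest interval
# against its own historical average. Fields and the 10x multiplier are
# assumptions; real detection should also weigh destinations and data sensitivity.
from collections import defaultdict

def exfiltration_candidates(flows, historical_avg_bytes, multiplier=10):
    """flows: list of {"src_host", "bytes_out"}. Returns hosts whose total
    outbound volume exceeds multiplier x their historical average."""
    totals = defaultdict(int)
    for f in flows:
        totals[f["src_host"]] += f["bytes_out"]
    return {
        host: total
        for host, total in totals.items()
        if total > multiplier * historical_avg_bytes.get(host, 50_000_000)
    }

# Example: an archive server that usually sends ~100 MB pushes 10 GB outbound.
flows = [{"src_host": "db-archive-01", "bytes_out": 2_000_000_000} for _ in range(5)]
print(exfiltration_candidates(flows, {"db-archive-01": 100_000_000}))
```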
Hour 6–8: The pre-encryption “setup”—quiet changes with big consequences
Before encryption, attackers often prepare the environment to make the blast as effective as possible. They may disable endpoint protection, shut down services that could interfere, stop databases cleanly to ensure files encrypt properly, and ensure the ransomware payload can run broadly. They may test encryption on a small subset of systems to confirm the payload works and that it won’t crash too early.
This phase can look like maintenance activity: services stopping, scripts running, management tools deploying packages. The difference is context. If changes are happening without tickets, outside maintenance windows, and across unexpected systems, it’s time to assume compromise and act quickly. The organizations that recover fastest are often the ones that choose containment early, even if they don’t have perfect certainty yet.
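A simple way to encode that “context” is to score maintenance-like actions that happen outside approved windows, as in the sketch below. The action list, scores, window, and threshold are assumptions meant to illustrate the shape of the rule, not a tested detection.

```python
# Minimal sketch: treat a cluster of "maintenance-like" actions outside an
# approved window as a signal to assume compromise and contain. Action
# weights, the window, and the threshold are illustrative assumptions.
from datetime import datetime

MAINTENANCE_HOURS = range(22, 24)  # assumed approved window: 22:00-23:59
SUSPECT_ACTIONS = {
    "edr_disabled": 5,
    "service_stopped": 1,
    "shadow_copies_deleted": 5,
    "package_pushed": 2,
}

def containment_recommended(events, threshold=8):
    """events: list of {"host", "action", "time"}. Returns hosts whose
    out-of-window activity score crosses the threshold."""
    scores = {}
    for e in events:
        hour = datetime.fromisoformat(e["time"]).hour
        if hour in MAINTENANCE_HOURS:
            continue  # inside the approved window; ignore for this heuristic
        scores[e["host"]] = scores.get(e["host"], 0) + SUSPECT_ACTIONS.get(e["action"], 0)
    return [h for h, s in scores.items() if s >= threshold]

sample = [
    {"host": "hyp-03", "action": "edr_disabled", "time": "2024-05-02T14:05:00"},
    {"host": "hyp-03", "action": "shadow_copies_deleted", "time": "2024-05-02T14:07:00"},
]
print(containment_recommended(sample))  # ['hyp-03']
```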
Hour 8–10: Encryption begins—first wave on critical systems
Encryption rarely hits every device at once. Attackers often start with servers that create maximum disruption: file servers, virtualization hosts, application servers, and shared storage. The goal is to break the organization’s ability to function and communicate. In some incidents, endpoints get hit later, after the organization is already struggling. That sequencing is deliberate. It forces decision-makers to operate in a fog, with limited systems and incomplete information.
At this moment, the attack becomes visible. Users report files that won’t open. Applications fail. Systems reboot into errors. Security teams see alerts spiking. Sometimes the first obvious sign is not encryption but the sudden failure of multiple services at once, because the attacker stopped critical processes as a precursor. The “minute-by-minute” feeling becomes very real: every phone call is urgent, every dashboard is flashing, and every decision has tradeoffs.
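Endpoint and file-server telemetry can make the first encryption wave visible within minutes. The sketch below flags hosts renaming files to an unfamiliar extension at a rate no human workflow produces; the fields, threshold, and extension allow-list are assumptions.

```python
# Minimal sketch: flag hosts where files are being renamed to an unfamiliar
# extension at an abnormal rate. Fields and thresholds are illustrative;
# endpoint and file-server tooling exposes similar telemetry.
from collections import defaultdict

def mass_rename_hosts(file_events, per_minute_threshold=100):
    """file_events: list of {"host", "minute", "new_extension"}.
    Returns hosts with an abnormal rename rate to a single new extension."""
    counts = defaultdict(int)
    for e in file_events:
        counts[(e["host"], e["minute"], e["new_extension"])] += 1
    return sorted({host for (host, _, ext), n in counts.items()
                   if n >= per_minute_threshold and ext not in {".docx", ".xlsx", ".pdf"}})

# Example: 500 files on one file server renamed to ".lckd" within a single minute.
events = [{"host": "fs-01", "minute": "10:14", "new_extension": ".lckd"} for _ in range(500)]
print(mass_rename_hosts(events))  # ['fs-01']
```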
Hour 10–12: The ransom note and the scramble for coordination
The ransom note is not just a demand; it’s a signal that the attacker believes they have achieved leverage. It may appear on encrypted machines, on shared drives, or in directories designed to be noticed. Sometimes the attacker also contacts the organization via email. The message is designed to create urgency, fear, and a sense that time is running out.
Inside the organization, this is where response succeeds or collapses based on coordination. If roles are unclear, people take random actions. Someone reboots servers “to see if it fixes it.” Someone disables a switch without telling anyone. Someone starts restoring backups into an environment that might still be compromised. Those actions can slow recovery, destroy evidence, and increase damage. Effective teams shift quickly into incident command: designate leadership, centralize communication, document actions, and make changes deliberately.
Hour 12–24: Containment—stopping spread without breaking recovery
Containment is both technical and organizational. Technically, teams isolate affected segments, disable compromised accounts, block suspicious traffic, and remove remote access paths that are being abused. Organizationally, teams decide what can be taken offline, what must keep running, and which systems are safe enough to touch.
This is where the “minute-by-minute” breakdown becomes less about one perfect move and more about controlled triage. Some systems will be sacrificed to protect others. Some services will be paused to reduce spread. Teams also begin building an evidence-driven timeline: when access began, which accounts were used, which systems were touched, and whether data theft likely occurred. That timeline guides recovery because it helps answer the most important question: if we restore, will we restore into a still-compromised environment?
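One lightweight aid for deliberate, documented containment is an action log that forces every change to carry an owner, a timestamp, and a rollback note, so the evidence timeline and the recovery plan stay in sync. The structure below is a sketch with hypothetical fields, not a substitute for a real incident-management tool.

```python
# Minimal sketch: a containment log where every action records who did it,
# when, and how to undo it. Fields and examples are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ContainmentAction:
    description: str   # e.g. "disable account svc-deploy"
    owner: str         # who executed it
    rollback: str      # how to undo it safely later
    executed_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class IncidentLog:
    actions: List[ContainmentAction] = field(default_factory=list)

    def record(self, description: str, owner: str, rollback: str) -> ContainmentAction:
        action = ContainmentAction(description, owner, rollback)
        self.actions.append(action)
        return action

    def timeline(self) -> List[str]:
        return [f"{a.executed_at}  {a.owner}: {a.description}" for a in self.actions]

log = IncidentLog()
log.record("isolate VLAN 40 (file servers)", "net-oncall", "re-enable uplink on sw-core-2")
log.record("disable account svc-deploy", "iam-oncall", "re-enable after credential reset")
print("\n".join(log.timeline()))
```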
Day 2–3: Recovery—restoring systems without resurrecting the attacker
Recovery is often slower than people expect because it is not just “restore from backup.” It’s rebuild, validate, monitor, and gradually reopen. Teams start with identity and core infrastructure, because those systems govern everything else. They verify that restored systems are clean, that privileged access is controlled, and that monitoring is functional. They prioritize business-critical applications based on impact and dependencies, not based on which team shouts the loudest.
A common mistake is restoring too broadly too quickly. If the attacker still has credentials, persistence, or access, restoration becomes a loop. Another common mistake is restoring without fixing the initial access path. If the attack started with stolen credentials and the organization doesn’t reset and re-secure identity, the attacker can return. In a minute-by-minute mindset, recovery is the phase where patience is actually speed. The fastest route back to stability is the one that prevents a second collapse.
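Sequencing restoration by dependency is one way to keep “patience is speed” concrete: identity and core infrastructure come back and get validated before anything that relies on them. The dependency map below is a made-up example; a real plan should come from your own architecture and runbooks.

```python
# Minimal sketch: order restoration by dependency so identity and core
# infrastructure are rebuilt and validated before the systems that rely on
# them. The dependency map is illustrative only.
from graphlib import TopologicalSorter

# "X depends on Y" means Y must be restored (and validated) before X.
dependencies = {
    "identity-provider": set(),
    "backup-platform": {"identity-provider"},
    "virtualization": {"identity-provider"},
    "file-services": {"virtualization", "identity-provider"},
    "erp": {"file-services", "identity-provider"},
    "email": {"identity-provider"},
}

restore_order = list(TopologicalSorter(dependencies).static_order())
print(" -> ".join(restore_order))
# identity-provider comes first; business applications only after their foundations.
```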
Day 3–7: The aftershock—investigation, communications, and trust rebuilding
Even after systems return, the incident continues. Stakeholders want answers. Leadership wants to know what happened and whether it could happen again. Customers want reassurance. Regulators may require notifications depending on the nature of any data exposure. Internally, teams need to recover emotionally and operationally. Ransomware incidents are exhausting, and fatigue can create new mistakes.
This period is also where security maturity can jump. The incident exposes what’s missing: weak identity controls, flat networks, poor logging, untested backups, unclear incident roles, or underfunded monitoring. Organizations that learn well don’t just patch one vulnerability; they redesign how they detect, contain, and recover. They treat the incident as a diagnostic that revealed systemic weaknesses.
Why minutes matter: the small choices that change outcomes
Ransomware is often described as a catastrophe, but the catastrophe is built from small decisions. Did someone approve a risky exception? Did MFA coverage have gaps? Were privileged accounts overused? Were backups tested under pressure? Was there a clear incident commander? A weakness behind any of those questions compresses the attacker’s timeline and stretches the defender’s.
The good news is that minute-by-minute advantage can be trained. Visibility can be improved. Identity can be hardened. Segmentation can slow lateral movement. Backups can be made resilient and tested. Response roles can be rehearsed so that when the clock starts, teams don’t waste the first hours arguing about what to do. The goal is not to predict every attack. The goal is to reduce how fast an attacker can turn a foothold into leverage.
A practical way to think about preparedness
If you want a simple mental model, think in three clocks: the attacker’s clock, the defender’s clock, and the business clock. The attacker’s clock measures how quickly they can escalate privileges and reach systems that matter. The defender’s clock measures how quickly you can detect, decide, and contain. The business clock measures how long operations can tolerate disruption before pressure forces risky decisions.
Preparedness is about buying time on all three clocks. Faster detection buys minutes. Segmentation buys hours. Strong identity buys days. Tested recovery buys confidence. And confidence is a powerful control during ransomware, because panic is the attacker’s most reliable ally.
