What actually happens to your files after 7 days
sto.care's product page says “files auto-delete after 7 days.” That sentence hides a surprising amount of engineering. The 7 isn't enforced by a cron job watching the clock; it's a TTL attribute on a DynamoDB row plus an S3 Object Lifecycle policy plus the way AWS's underlying object store handles deletes. None of those three pieces is instantaneous, and only one of them actually destroys data.
This piece walks the full lifecycle, from the moment your browser fires the upload PUT to the moment the bytes are unrecoverable, with primary sources at every step. We'll cover what “deleted” means in S3, why DynamoDB TTL is a sweep and not a precision timer, and where cryptographic erasure fits into the picture. The answer to “is my file gone after 7 days?” is “yes, but the word ‘gone’ is doing more work than people realise.”
The lifecycle, end to end
Here is what physically happens when you drop a file into the upload zone, share the link, and walk away.
Stage 1: upload
Your browser hits a Lambda that returns a presigned PUT URL valid for an hour. Your file goes directly from the browser to S3 over TLS, landing as a single object encrypted at rest with SSE-S3. Server-side encryption with S3-managed keys means AWS rotates and protects the keys; we never see them. That detail matters for the deletion section later.
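In boto3 terms, the presign step looks roughly like this; a sketch, with illustrative bucket and key names rather than our production values:

```python
import boto3

s3 = boto3.client("s3")

def presign_upload(bucket: str, key: str) -> str:
    """Return a presigned PUT URL valid for one hour.

    The browser PUTs the file straight to S3 over TLS; S3 applies
    SSE-S3 encryption to new objects by default.
    """
    return s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=3600,  # one hour, the validity named above
    )
```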
Once the PUT succeeds, a confirm Lambda writes a row to our DynamoDB table. The row carries the upload ID, the sender's email, the S3 key, and a ttl attribute set to the current Unix time plus 604,800 seconds (the UPLOAD_TTL_SECONDS constant in the codebase, which is 7 days expressed as seconds).
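The confirm write, sketched with the same caveats (the table name and any attribute not listed above are assumptions):

```python
import time
import boto3

UPLOAD_TTL_SECONDS = 604_800  # 7 days, the constant named above

table = boto3.resource("dynamodb").Table("uploads")  # table name is illustrative

def confirm_upload(upload_id: str, sender_email: str, s3_key: str) -> None:
    """Write the metadata row; the ttl attribute schedules the TTL sweep."""
    now = int(time.time())
    table.put_item(
        Item={
            "pk": f"UPLOAD#{upload_id}",  # key scheme from the retention map below
            "sender_email": sender_email,
            "s3_key": s3_key,
            "downloads": 0,
            "created_at": now,
            "ttl": now + UPLOAD_TTL_SECONDS,  # epoch seconds, as DynamoDB TTL expects
        }
    )
```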
Stage 2: distribution
SES sends two emails: a confirmation email to you with a revoke link, and (if you've added recipients) a download notification to them. The revoke link contains a short token that is itself a row in DynamoDB with its own short TTL. This is the path that gives you the “take it back any time” capability that fixed-expiry services don't have.
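A sketch of how a revoke token row like that might be minted; the token length, key scheme, and TTL value here are assumptions, since all we've said above is "short token" and "short TTL":

```python
import secrets
import time

REVOKE_TOKEN_TTL_SECONDS = 3600  # assumed; the text above says only "short TTL"

def mint_revoke_token(table, upload_id: str) -> str:
    """Store a one-off revoke token as its own DynamoDB row with its own TTL."""
    token = secrets.token_urlsafe(16)
    table.put_item(
        Item={
            "pk": f"REVOKE#{token}",  # key scheme is illustrative
            "upload_id": upload_id,
            "ttl": int(time.time()) + REVOKE_TOKEN_TTL_SECONDS,
        }
    )
    return token
```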
Stage 3: download
Each recipient hits a download endpoint. The Lambda checks the upload record exists, generates a fresh presigned GET URL (one hour), and redirects. The S3 GET happens directly between the recipient and AWS; we don't proxy bytes. A small download counter on the row gets atomically incremented for our own observability.
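The download path in sketch form (handler return shape and names are illustrative; the real handler also enforces the 7-day cutoff discussed later):

```python
import boto3

s3 = boto3.client("s3")

def handle_download(table, bucket: str, upload_id: str) -> dict:
    """Validate the metadata row, bump the counter, redirect to a presigned GET."""
    item = table.get_item(Key={"pk": f"UPLOAD#{upload_id}"}).get("Item")
    if item is None:
        return {"statusCode": 404}  # row swept or revoked: the link is dead

    # Atomic increment: no read-modify-write race between concurrent downloads.
    table.update_item(
        Key={"pk": f"UPLOAD#{upload_id}"},
        UpdateExpression="ADD downloads :one",
        ExpressionAttributeValues={":one": 1},
    )

    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": bucket, "Key": item["s3_key"]},
        ExpiresIn=3600,  # one hour, as above
    )
    return {"statusCode": 302, "headers": {"Location": url}}
```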
Stage 4: the 7-day mark
Two parallel mechanisms are racing each other to the same outcome. The DynamoDB row's TTL has fired and is queued for sweep. The S3 bucket has a lifecycle policy expiring objects older than 7 days, and that lifecycle scanner runs on its own schedule. Either one would be sufficient to make the link stop working, because the API requires both pieces (a metadata row to validate the link AND an object to serve). In practice the metadata typically goes first.
Stage 5: revoke (optional, can fire any time)
If you click the revoke link in your confirmation email, a Lambda synchronously calls S3 DeleteObject and then deletes the DynamoDB metadata row. From your perspective the link breaks within seconds. From AWS's perspective the object is now in the same state as a normal lifecycle expiry, which brings us to the part of the article that actually matters.
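The revoke path, under the same illustrative naming:

```python
import boto3

s3 = boto3.client("s3")

def revoke(table, bucket: str, upload_id: str, s3_key: str) -> None:
    """Synchronous revoke: tombstone the object, then drop the metadata row.

    Deleting the object first means a download that races the revoke
    can't mint a fresh presigned URL for a key that still resolves.
    """
    s3.delete_object(Bucket=bucket, Key=s3_key)           # API-level tombstone
    table.delete_item(Key={"pk": f"UPLOAD#{upload_id}"})  # link now returns 404
```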
What “deleted” really means in cloud storage
Most consumer-facing “your data is deleted” copy treats delete as a single atomic event. Cloud storage doesn't work that way. There are at least three different things people mean when they say a file is gone, and they have different security properties.
Tombstoning (the default)
When you call S3 DeleteObject on a non-versioned bucket, the object becomes immediately inaccessible: GETs return 404, the key disappears from listings, the API behaves as if the object never existed. AWS explicitly notes that on a non-versioned bucket the operation “permanently removes” the object, per the deletion docs. But “permanently removes” here is API-level. The bytes on the underlying physical media may still exist for some period until garbage collection sweeps them, and the data sits across many physical replicas in different facilities for durability.
This is the same model Apache Cassandra documents as tombstoning: a delete is itself a small write that masks a previous write, and the actual reclamation happens on a compaction pass. Most distributed stores do something equivalent because chasing down every replica synchronously would cost too much.
Lifecycle expiration
S3 lifecycle policies are tombstoning on a schedule. You configure a rule (“expire objects 7 days after creation”) and S3 runs a periodic sweep that issues DeleteObject calls on matching keys. AWS's lifecycle management documentation explicitly says expiration is asynchronous: “there might be a delay between the expiration date and the date at which Amazon S3 removes an object.” Billing stops at the expiration date, but the physical removal happens on AWS's schedule.
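For reference, a 7-day expiration rule like that is a few lines of configuration; this sketch assumes boto3 and an illustrative bucket name:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="uploads-bucket",  # illustrative name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # empty prefix: apply to every object
                "Expiration": {"Days": 7},  # days since object creation
            }
        ]
    },
)
```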
That asynchrony is fine for our use case (the link stops working because the metadata is gone), but it's the kind of detail omitted from most marketing pages. If a service tells you “files are deleted at midnight on day 7,” the truthful version is “the link stops working at day 7; the bytes are queued for removal at AWS's convenience after that.”
Cryptographic erasure
Cryptographic erasure is the more interesting primitive. The idea is simple: if data is encrypted with a key, and you destroy the key, the ciphertext is unrecoverable even if the bytes still exist somewhere on disk. AWS documents this approach explicitly in their KMS cryptographic details: “Cryptographic erasure is the technique of destroying the cryptographic keys used to encrypt data, rendering the data unreadable.”
The standards bodies recognise this too. NIST's data sanitization guideline, SP 800-88 Rev. 1, lists cryptographic erase as an acceptable sanitization technique for media that have been encrypted from the start. For cloud storage, crypto erasure is structurally cleaner than overwrite: you don't have to chase replicas, you don't have to issue physical-disk commands, you just destroy a small piece of key material.
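To make the primitive concrete, here's a toy sketch using Python's cryptography package; it isn't anything sto.care runs, just the shape of the idea:

```python
from cryptography.fernet import Fernet

# Per-object key: the only thing that can ever decrypt this ciphertext.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"the file bytes")

# "Crypto erase" = destroy the key. In a real system the key lives in an
# HSM or KMS and destruction is audited; Python's `del` is only symbolic.
del key

# The ciphertext may persist on disk, in replicas, in backups. Without
# the key, none of those copies is readable.
```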
sto.care doesn't use customer-managed KMS keys for per-upload crypto erasure today. We use SSE-S3, which is a single AWS-managed key per bucket. That means our deletion model is firmly in the tombstone-plus-lifecycle category, not the per-object crypto-erase category. Be skeptical of any consumer service that claims cryptographic erasure without documenting per-object key handling; without it, the term is decorative.
Secure overwrite (mostly a red herring)
The 1990s consumer software market sold a lot of utilities that promised “DoD 5220.22-M” or Gutmann-method secure overwrites: write zeroes, write ones, write random bytes, thirty-five passes. None of that translates to cloud storage. AWS doesn't expose physical sectors. Modern SSDs use wear levelling and TRIM, so logically overwriting a sector doesn't map to overwriting the same physical cell. NIST 800-88 specifically warns that overwrite is not a reliable sanitization technique for SSDs and recommends crypto erase or destroy for them.
The summary: in cloud object storage, “deleted” means the API stops serving it and a sweep eventually reclaims the physical bytes. Anything stronger than that requires either per-object key destruction (crypto erase) or physical-media destruction. Consumer SaaS copy that dresses up deletion with adjectives is, in practice, the same tombstone-plus-sweep that everyone else uses.
| Mechanism | Object accessible? | Storage reclaimed? | Bytes recoverable? |
|---|---|---|---|
| S3 DeleteObject | No (404) | Eventually (sweep) | Not by you; theoretically by AWS until purge |
| S3 lifecycle expiration | No (after sweep) | Eventually (sweep) | Same as above |
| Cryptographic erase (per-object key) | No (key gone) | Eventually | No, ciphertext unreadable |
| Overwrite (DoD, Gutmann) | N/A in cloud | N/A in cloud | Not meaningful for SSD-backed cloud storage |
The DynamoDB TTL piece
DynamoDB's TTL feature is what we use to schedule expiry on the metadata side. You set a numeric attribute (we use ttl) to an epoch second, and AWS removes the item some time after that epoch passes. The exact wording in the DynamoDB TTL documentation is worth reading carefully:
> Items are typically deleted within a few days of their expiration time, on a best effort basis.
Earlier AWS documentation gave more specific deletion windows; current S3 and DynamoDB docs avoid hard numbers. Either way, TTL is not a precision deletion timer. It's a scheduling hint that tells DynamoDB “this row is eligible for sweep after this timestamp.” The sweep runs on AWS's schedule, and the deletion is a normal background process, not a synchronous event at the millisecond the timestamp passes.
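Enabling TTL is a one-time table setting that names which numeric attribute holds the expiry epoch; a sketch, with an illustrative table name:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which numeric attribute holds the expiry epoch. Items whose
# ttl is in the past become *eligible* for the background sweep; nothing is
# deleted synchronously at the moment the timestamp passes.
dynamodb.update_time_to_live(
    TableName="uploads",  # illustrative name
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "ttl"},
)
```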
For our purposes this is fine: the application reads the row, sees it exists or doesn't, and serves accordingly. But a sweep is not a cutoff: a row that's ttl-expired but not yet swept still exists, so on its own the TTL would give you "the link works for at least 7 days, and may briefly continue working until the sweep catches up."
If you want strict 7-day behaviour you layer an application-level check ("reject if now > created_at + 7d") on top of the TTL. We added that check on the read path, so the link stops resolving at exactly the 7-day mark even if the row hasn't been swept yet. The underlying record is the part that takes the AWS sweep window to physically clear.
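In sketch form, that read-path check is tiny (created_at being the timestamp written at upload time):

```python
import time

UPLOAD_TTL_SECONDS = 604_800  # 7 days

def is_expired(item: dict) -> bool:
    """Strict cutoff on the read path, independent of whether the
    TTL sweep has removed the row yet."""
    return time.time() > item["created_at"] + UPLOAD_TTL_SECONDS
```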
What we keep, what we don't
A 7-day file expiry is a useful headline number, but the sharper question is “what data of mine, in total, is still on sto.care's infrastructure on day 8?” Here is the full retention map.
The file itself
Gone via S3 lifecycle 7 days after upload, or via S3 DeleteObject within seconds if you revoke. Either path ends with the object inaccessible at the API and queued for physical purge.
The DynamoDB metadata record
Stored under UPLOAD#<id> as a single item. Carries the upload ID, sender email, S3 key, expiry timestamp, and download counter. Removed by TTL sweep within a short window of the 7-day mark. The application's read path also enforces the 7-day cutoff, so the row stops resolving even before sweep.
Sender email
Lives on the metadata record. Goes when the metadata goes. We don't copy it to a separate marketing table; we don't use it to send anything other than the confirmation, the recipient notification, and (if you click it) the revoke flow. There's no analytics pipeline that vacuums emails into a CRM.
IP address (rate limiting only)
Stored under RATELIMIT#<ip> with a 2-hour TTL (the RATE_LIMIT_TTL_SECONDS constant). The row carries only the IP and a counter. It exists so we can enforce 10 uploads per IP per hour. After 2 hours it's gone via the same TTL sweep.
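A sketch of how a counter row like that can enforce the cap atomically; the window bookkeeping here is simplified to the row's lifetime, so treat it as the shape rather than our exact logic:

```python
import time
from botocore.exceptions import ClientError

RATE_LIMIT_TTL_SECONDS = 7_200  # the 2-hour row lifetime named above
MAX_UPLOADS = 10                # the per-IP cap named above

def allow_upload(table, ip: str) -> bool:
    """Create-or-increment the per-IP counter; returns False once the cap is hit.

    One conditional update both creates the row on first sight and
    rejects atomically when the counter reaches the cap.
    """
    try:
        table.update_item(
            Key={"pk": f"RATELIMIT#{ip}"},  # key scheme from the article
            UpdateExpression="ADD hits :one SET #t = if_not_exists(#t, :exp)",
            ConditionExpression="attribute_not_exists(hits) OR hits < :cap",
            ExpressionAttributeNames={"#t": "ttl"},
            ExpressionAttributeValues={
                ":one": 1,
                ":cap": MAX_UPLOADS,
                ":exp": int(time.time()) + RATE_LIMIT_TTL_SECONDS,
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # cap reached inside the current window
        raise
```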
Server logs
Lambda emits structured logs to CloudWatch (request IDs, status codes, error stacks). We don't log file content or sender emails into operational logs. CloudWatch retention is configured per log group and defaults to keeping logs indefinitely; we haven't set a short custom retention here, so logs persist until we tighten that setting. Logs don't carry your file or your email; they carry request shape, used for debugging.
What we don't have
No user accounts. No marketing list. No analytics on file content. No bucket versioning (so no “old version” lurking after delete). No backup tier we forgot to mention. The narrowest possible state once your 7 days are up: a few CloudWatch lines and, for a brief sweep window, the metadata row about to vanish.
What other tools tell you (and don't)
We read the disclosure pages for several common file-sharing services to see how they describe their deletion model. The pattern across the industry is that the headline is loud and the mechanism is quiet.
WeTransfer
WeTransfer's help articles say transfers expire after 3 days on the free tier and that files are then deleted, but they don't publish a mechanism doc. Their privacy policy notes file deletion follows the transfer expiry but is silent on whether deletion is immediate, swept, crypto-erased, or tombstone-plus-sweep. The recipient-side data retention is similarly vague.
Google Drive
Files in Google Drive don't auto-expire; they live in your Drive until you actively delete them. Google's docs explain that deleted Drive items go to Trash and stay there for 30 days before automatic permanent deletion (or you can empty Trash sooner). After permanent deletion Google notes it “may take some time to fully delete” from their systems. Workspace admins get longer recovery windows.
Dropbox
Dropbox's deletion model is closer to Drive's: items stay in a deleted-files area for 30 days on Basic and Plus accounts (longer for Business), then are permanently deleted. Dropbox is more explicit than most about the retention window but still doesn't document the underlying mechanism.
SwissTransfer
Run by Infomaniak, SwissTransfer offers expiry windows up to 30 days. Their public docs frame this as "files are deleted" without describing how. Infomaniak markets itself on privacy, which makes the mechanism gap more notable.
sto.care
The mechanism is documented above. The constants are in the codebase. The TLS-plus-SSE-S3 boundary is named directly. If the real answer is “link breaks at day 7, bytes purged within a sweep window after,” saying that is more useful than “your data is fully deleted instantly.” Use that as a yardstick when reading any other service's deletion copy.
How to read any tool's “we delete after X days” claim
Take this as a checklist for evaluating any service that promises time-bound deletion. The questions below are what to look for in a privacy policy or help center; if a service can't answer them, you can probably substitute “tombstoning, eventual purge” as a default and not be far off.
- Is the mechanism named? Cryptographic erasure, object lifecycle expiration, application-level soft-delete, manual admin process. If not named, assume tombstoning plus sweep.
- Is metadata retention separated from file retention? Many services delete the file but keep a database row referencing it for analytics or fraud detection. A complete answer to “files deleted in 7 days” says what happens to the metadata too.
- Are server logs scoped? Operational logs are legitimate, but they shouldn't carry file content or identifiable user data. The retention window for logs should be either documented or short.
- Is there a transparency report? Cloudflare, Apple, Google publish them. A small file-sharing service likely doesn't, but the absence of one shouldn't be read as evidence either way.
- Is the model end-to-end encrypted? If the service can read your file, deletion semantics matter more. If only you can read it, deletion is somewhat redundant; the worst-case recoverable artifact is encrypted ciphertext.
What this means in practice
The 7-day-then-gone model is the right fit for most casual file transfers: send your editor the final cut, send your client the invoice scan, send a relative the family photos. The link works for a week, then breaks. If you change your mind on day 3, click revoke; the link breaks within seconds. Sensible defaults, no account on either side.
Where the model isn't the right answer:
- Sub-second revoke needs. If “the link must stop working in under a second of me clicking revoke” is a real requirement (think regulated discovery), you want a service where the API check is synchronous and proxied. Our revoke is fast (low seconds, not milliseconds), and we don't make stronger claims.
- Zero-knowledge / E2EE. If “AWS could not read this file even with a court order” is a hard requirement, Tresorit, Mega, or Proton Drive are the right tier. They give up some of sto.care's simplicity (no signup, browser-only, instant share link) in exchange for not holding decryption keys.
- Regulated industries with strict chain of custody. Healthcare, legal, certain financial workflows often require per-event audit logs, signed receipts, and configurable retention. A consumer-grade transfer service is the wrong layer.
- Long-term collaborative storage. If you want a shared folder for ongoing edits, you want Google Drive or Dropbox. A 7-day expiry is hostile to that workflow on purpose.
For everything in between, sto.care's lifecycle is what we've documented here: TLS in transit, SSE-S3 at rest, TTL plus lifecycle for expiry, immediate revoke on demand, no accounts, no marketing list. If you're curious about the broader category, we have a separate piece on encrypted file transfer that names the boundaries of TLS, AES-256, and what “encrypted” actually means on a file-sharing site.
The headline still works. Files auto-delete after 7 days. The engineering underneath is a few more sentences. We figured the sentences were worth writing down.
Frequently asked questions
Does sto.care delete files immediately at the 7-day mark?
Not to the second. The link stops working at the 7-day mark because the application enforces the cutoff on the read path. The DynamoDB row gets swept within a short window (typically tens of minutes to a couple of days) by AWS's TTL process. The S3 object is removed by lifecycle expiration on a similar sweep schedule.
What happens to my email address after the file expires?
Your email lives on the upload metadata row. When the row goes, the email goes with it. We don't keep a separate marketing list, and there's no “archive” tier where the email persists past upload expiry.
Can I get my file back after expiry or revoke?
No. We don't keep backups, and the bucket isn't versioned. Once expiry or revoke fires, the API returns 404 and the underlying object is queued for physical removal. There's no recovery path on our side.
Is sto.care end-to-end encrypted?
No. We use TLS in transit and SSE-S3 at rest. AWS holds the keys. For a true E2EE model where only you and the recipient can read the file, look at Tresorit, Mega, or Proton Drive. We'd rather name the boundary clearly than overclaim.
What does cryptographic erasure mean, and do you use it?
Cryptographic erasure destroys the encryption key the data depends on, making the ciphertext unreadable even if the bytes still exist somewhere. NIST 800-88 recognises it as a sanitization technique. sto.care uses SSE-S3 (one bucket-scoped AWS-managed key), not per-object customer-managed KMS keys, so our deletion model is tombstone-plus-sweep, not per-object crypto erasure.
How long do you keep my IP after the file is gone?
Rate limit records carry the IP for 2 hours (the RATE_LIMIT_TTL_SECONDS constant in the codebase), then TTL out. Our application code doesn't write IPs to a separate persistent store. CloudWatch operational logs follow the retention settings described above and don't carry user-facing data.