One of the most common — and most dangerous — security oversights in modern web development is accidentally exposing sensitive credentials in your production build. For React applications, the dist or build folder is the compiled output that gets deployed to the public internet. If your AWS access keys, API secrets, or other sensitive environment variables end up bundled into that output, anyone with a browser's developer tools can extract them in seconds.
This is not a theoretical risk. It happened to one of our clients — and the consequences were severe. Here's what went wrong, how we recovered, and the architectural solution we implemented to ensure it never happens again.
The Problem: AWS Keys Hardcoded in a React Build
A startup client approached ITPenthouse after discovering unauthorized activity in their AWS account. Their S3 buckets had been accessed, files were deleted or overwritten, and their monthly AWS bill had spiked dramatically due to unauthorized resource usage.
After an initial security audit, we traced the root cause to their React frontend. During the build process, the application bundled AWS access keys directly into the JavaScript output. These keys were embedded as environment variables that were intended to allow the frontend to upload files directly to an S3 bucket.
The build pipeline used REACT_APP_ prefixed variables — which, by design in Create React App and similar toolchains, are injected into the client-side bundle at compile time. The developers had placed REACT_APP_AWS_ACCESS_KEY_ID and REACT_APP_AWS_SECRET_ACCESS_KEY in their .env file, not realizing these values would become fully visible in the minified JavaScript served to every user.
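For illustration (the values below are placeholders, not real keys), the problematic `.env` looked roughly like this; after a production build, a plain-text search such as `grep -r AKIA build/` finds the key verbatim in the minified output:

```
# .env: any REACT_APP_ value is compiled into the public bundle as a string literal
REACT_APP_AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
REACT_APP_AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```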
Why This Is So Dangerous
- Complete visibility: Anyone can open the browser's DevTools, inspect network requests, or search through JavaScript source maps to find embedded strings. Minification does not encrypt or hide values — it only shortens variable names.
- Automated scraping: Bots routinely scan public websites and open-source repositories for patterns matching AWS keys. Once discovered, credentials are often exploited within minutes.
- Broad blast radius: AWS access keys often have permissions far exceeding what the frontend needs. In this case, the compromised keys had read/write access to multiple S3 buckets, including ones containing customer data and internal backups.
The Recovery: How S3 Versioning Saved the Day
The attacker had deleted and overwritten critical files across several S3 buckets. Under normal circumstances, this would have meant permanent data loss. However, our client had one saving grace: S3 Object Versioning was enabled on their primary buckets.
S3 Versioning is a feature that preserves every version of every object stored in a bucket, allowing earlier versions to be retrieved and restored at any time. When versioning is enabled, a delete operation does not permanently remove an object — it inserts a "delete marker," and the previous versions remain fully recoverable.
Our team immediately took the following recovery steps:
- Revoked the compromised AWS credentials and rotated all access keys across the organization.
- Audited CloudTrail logs to determine exactly which resources had been accessed, modified, or exfiltrated.
- Restored all deleted and overwritten files by reverting to previous object versions in S3. Because versioning was enabled, every file had a complete history, allowing us to roll back to the last known good state.
- Reviewed IAM policies and applied the principle of least privilege to all remaining credentials.
- Scanned the entire codebase and build output to confirm no other secrets were exposed.
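The restore step reduces to selection logic over the output of S3's `ListObjectVersions` API. Here is a sketch in plain JavaScript, assuming the `Versions` and `DeleteMarkers` array shapes that API returns (the actual list and copy-back calls are omitted):

```javascript
// Given the Versions and DeleteMarkers arrays from ListObjectVersions,
// find, for each key whose latest entry is a delete marker, the most
// recent surviving object version to restore.
function findVersionsToRestore(versions, deleteMarkers) {
  // Keys that currently appear "deleted" (their latest entry is a marker).
  const deletedKeys = new Set(
    deleteMarkers.filter((m) => m.IsLatest).map((m) => m.Key)
  );

  const restore = {};
  for (const v of versions) {
    if (!deletedKeys.has(v.Key)) continue;
    const current = restore[v.Key];
    // LastModified may be a Date or an ISO-8601 string; both compare with >.
    if (!current || v.LastModified > current.LastModified) {
      restore[v.Key] = v;
    }
  }
  return restore; // key -> version to copy back as the new latest object
}
```

Restoring is then a matter of copying each selected version on top of its delete marker (or deleting the marker itself), which reinstates the last known good state.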
The recovery took less than 48 hours, and no customer data was permanently lost. Without versioning, the outcome would have been catastrophic.
Lesson: Enable Versioning Before You Need It
If you store anything of value in S3 — documents, media, user uploads, backups — enable versioning today. It costs marginally more in storage, but it provides an insurance policy that is invaluable when things go wrong, whether from a security breach, an accidental deletion by a team member, or a faulty deployment script.
The Fix: S3 Presigned URLs (One-Time Links)
The root architectural flaw was clear: the frontend should never have direct access to AWS credentials. The question was how to allow users to upload and download files from S3 without embedding secrets in the client.
The answer is S3 presigned URLs — sometimes referred to as one-time links, although strictly speaking a presigned URL remains valid until it expires; what makes it safe is that it is scoped to a single object and operation. Here is how they work:
How Presigned URLs Work
- The client requests an upload or download by calling your backend API (e.g., POST /api/get-upload-url).
- The backend generates a presigned URL using the AWS SDK with the server-side credentials that are never exposed to the frontend. This URL grants temporary, scoped permission to perform a single operation (upload or download) on a specific S3 object.
- The client uses the presigned URL to interact directly with S3. The URL expires after a defined period — typically 5 to 15 minutes — and is limited to the exact object and operation it was signed for, so it cannot be repurposed.
- No AWS credentials ever reach the browser. The backend holds the keys; the client only receives a short-lived, purpose-specific URL.
What We Implemented
For our client's application, we designed the following architecture:
- A lightweight backend endpoint (Node.js on AWS Lambda behind API Gateway) that authenticates the user, validates the request, and generates a presigned S3 URL with a 10-minute expiry.
- Scoped IAM roles: The Lambda function's execution role has permission to generate presigned URLs for only the specific bucket and prefix required. It cannot access other buckets or perform administrative actions.
- Frontend refactor: All references to AWS credentials were removed from the React codebase and the CI/CD environment configuration. The frontend now calls the backend API and uses the returned presigned URL for direct S3 upload or download.
- Automated secret scanning: We integrated tools like git-secrets and truffleHog into the CI pipeline to prevent any credentials from being committed or bundled in future builds.
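The refactored client-side upload path can be sketched as two plain `fetch` calls: one to the backend endpoint (the `/api/get-upload-url` name is illustrative, as above) and one `PUT` directly against S3. Note that nothing AWS-specific, and no credential of any kind, appears in this code:

```javascript
// Two-step upload: ask our backend for a presigned URL, then PUT the
// file straight to S3.
async function uploadFile(file) {
  const res = await fetch("/api/get-upload-url", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ fileName: file.name, contentType: file.type }),
  });
  if (!res.ok) throw new Error("could not obtain upload URL");
  const { url } = await res.json();

  // The presigned URL already encodes bucket, key, and expiry.
  const put = await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": file.type },
    body: file,
  });
  if (!put.ok) throw new Error("upload failed");
}
```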
Best Practices to Prevent Credential Exposure in Frontend Builds
Based on this engagement and our broader experience building secure applications, here are the rules every development team should follow:
- Never place secrets in client-side environment variables. Any variable prefixed with REACT_APP_, NEXT_PUBLIC_, or VITE_ will be embedded in your build output. Treat these as public.
- Use a backend or serverless function as a proxy for any operation that requires credentials — file uploads, third-party API calls, payment processing, and more.
- Implement presigned URLs for all direct-to-S3 interactions from the client.
- Enable S3 Versioning on all buckets containing important data.
- Enable MFA Delete on critical buckets to prevent even authenticated users from permanently deleting object versions without multi-factor authentication.
- Audit your build artifacts regularly. Download your production build, unminify the JavaScript, and search for strings that look like credentials, tokens, or internal URLs.
- Use automated scanning in CI/CD. Tools like git-secrets, truffleHog, and AWS's own credential scanning can catch leaks before they reach production.
- Rotate credentials regularly and use short-lived session tokens (via AWS STS) wherever possible.
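As a rough sketch of the artifact-audit step, a few regular expressions run over the bundle text will catch credential-shaped strings. The `AKIA`/`ASIA` key-ID prefixes are documented AWS formats; the last pattern is only a heuristic, and dedicated scanners use far richer rule sets:

```javascript
// Credential-shaped patterns to look for in build output.
const PATTERNS = [
  /AKIA[0-9A-Z]{16}/g, // long-term AWS access key ID
  /ASIA[0-9A-Z]{16}/g, // temporary (STS) access key ID
  /aws_secret_access_key\s*[:=]\s*["']?[A-Za-z0-9/+=]{40}/gi, // heuristic
];

// Returns every matching substring found in the given bundle text.
function scanForSecrets(text) {
  const hits = [];
  for (const pattern of PATTERNS) {
    for (const match of text.matchAll(pattern)) {
      hits.push(match[0]);
    }
  }
  return hits;
}
```

Run over each file in your production build directory, a scan like this makes a cheap final gate in CI before deployment.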
Conclusion
Security breaches rarely result from sophisticated, novel attacks. More often, they stem from simple misconfigurations — a secret placed in the wrong file, a permission policy that is too broad, a build pipeline that bundles what it shouldn't. The React dist folder is public by definition. Anything that goes into it is available to the world.
Our client was fortunate that S3 versioning allowed full data recovery. But fortune is not a strategy. The right approach is to architect your application so that sensitive credentials never leave the server, to use mechanisms like presigned URLs that grant minimal, time-limited access, and to build automated safeguards that catch mistakes before they become incidents.