How to Fix the 413 Request Entity Too Large Nginx Error on Ubuntu
In the high-stakes world of web development and server administration, few errors are as simultaneously straightforward and frustrating as the HTTP 413 “Request Entity Too Large” error.
If you’re managing a website or application on an Nginx server running Ubuntu, encountering this wall when users try to upload files or submit large forms can bring critical functionality to a grinding halt.
This comprehensive guide will demystify the error and provide you with proven, step-by-step solutions to resolve it for good.
Understanding the 413 Request Entity Too Large Error: The Digital “No Entry” Sign
At its core, the 413 Request Entity Too Large error is a server’s way of saying, “What you’re trying to send me is bigger than I’m allowed to accept.” It’s a protective measure. Nginx, by default, imposes limits on the size of client request bodies to prevent abuse, denial-of-service attacks, and to manage server resources efficiently.
When this error triggers, it typically means that the value of the client_max_body_size directive in your Nginx configuration is set lower than the size of the file or data a user is attempting to send. The default is often a mere 1 megabyte (1MB), which is insufficient for modern web applications involving image uploads, video content, document submissions, or large data exports.
The error manifests clearly in the Nginx error logs (typically /var/log/nginx/error.log) with entries like:
[error] 1234#1234: *1 client intended to send too large body
Your mission, should you choose to accept it, is to adjust this limit appropriately and ensure the configuration is applied correctly across the necessary contexts.
How to Fix the 413 Request Entity Too Large Error on Nginx and Ubuntu: Step-by-Step
Before editing any configuration files, it’s a cardinal rule to create a backup. A simple typo can take your site offline. Use sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup for the main file, and similar logic for others.
Solution 1: The Primary Fix – Adjusting client_max_body_size in Nginx
This directive is the primary control knob for the request body size limit.
1(a). Open your Nginx configuration file
The main configuration file is usually /etc/nginx/nginx.conf. However, the most effective and organized place to set this is often within the specific server block (virtual host) for your website. These files are commonly found in /etc/nginx/sites-available/.
sudo nano /etc/nginx/sites-available/your-domain.conf
Or, if you have a default file:
sudo nano /etc/nginx/sites-available/default
1(b). Locate the server { ... } block for your domain or application, then add or modify the client_max_body_size directive.
You can place it within the server block to apply it to the entire virtual host, or inside a specific location block (e.g., location /upload/) for more granular control.
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
# The key line: Increase the limit to, for example, 100MB
client_max_body_size 100M;
root /var/www/yourdomain.com/html;
index index.html index.htm index.nginx-debian.html;
...
}
Units Explained: You can specify the size in bytes (no suffix), kilobytes (k or K), megabytes (m or M), or gigabytes (g or G). 100M is 100 megabytes, a common starting point for many applications.
Save and close the file. In nano, press CTRL+X, then Y, then Enter.
Test the Nginx configuration for syntax errors: This crucial step verifies you haven’t introduced any mistakes.
sudo nginx -t
You should see:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reload Nginx to apply the changes: If the test passes, reload Nginx. Unlike a restart, this applies changes with minimal downtime.
sudo systemctl reload nginx
# Or use the traditional signal: sudo nginx -s reload
Why This Works: You are directly increasing the maximum allowable payload size that Nginx will process for the defined context (server or location), allowing larger uploads to pass through to your application (like PHP-FPM, Node.js, or a Python backend).
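After reloading, it helps to confirm the value Nginx actually sees, since a directive in one file can be shadowed by another. On a live server this is typically done with sudo nginx -T | grep -n client_max_body_size; the sketch below simulates the same check against a throwaway sample config so the expected output shape is visible:

```shell
# On a real Ubuntu server you would run:
#   sudo nginx -T | grep -n client_max_body_size
# Simulated here against a temporary sample config file:
conf=$(mktemp)
cat > "$conf" <<'EOF'
server {
    listen 80;
    client_max_body_size 100M;
}
EOF
# Print every line (with its line number) that sets the limit
found=$(grep -n "client_max_body_size" "$conf")
echo "$found"
```

If the command prints more than one line, check which context is most specific: a location-level value overrides a server-level one.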
Solution 2: Handling Upstream Proxies (PHP-FPM, FastCGI, etc.)
If your Nginx passes requests to a backend processor like PHP via PHP-FPM, you might still encounter issues after changing Nginx’s setting. This is because the backend may have its own limits.
For PHP-FPM (common with WordPress, Laravel, etc.):
You must also adjust the upload_max_filesize and post_max_size directives in your PHP configuration.
Find the active PHP configuration file:
php --ini | grep "Loaded Configuration File"
Open that file in an editor, then locate and modify the two key directives (use CTRL+W in nano to search):
; Maximum allowed size for uploaded files.
upload_max_filesize = 100M
; Maximum size of POST data that PHP will accept.
post_max_size = 100M
Pro Tip: It’s good practice to set post_max_size to be slightly larger than upload_max_filesize to account for form overhead.
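Both edits can also be made non-interactively with sed. The sketch below operates on a temporary stand-in file; on a real server you would point it at the file reported by php --ini (e.g. /etc/php/8.2/fpm/php.ini, where the version segment depends on your install):

```shell
# Stand-in for the real php.ini in this sketch (real path comes from `php --ini`)
ini=$(mktemp)
printf 'upload_max_filesize = 2M\npost_max_size = 8M\n' > "$ini"

# Raise the upload limit, and set post_max_size slightly higher for form overhead
sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 100M/' "$ini"
sed -i 's/^post_max_size = .*/post_max_size = 102M/' "$ini"

cat "$ini"
```

Note that post_max_size lands at 102M here, following the pro tip above of keeping it slightly larger than upload_max_filesize.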
Restart PHP-FPM to apply the changes.
sudo systemctl restart php8.2-fpm
Why This Works: Nginx handles the initial receipt of the request, but PHP-FPM handles the parsing of the POST data and file uploads. Both layers need their limits raised in harmony.
Solution 3: Adjusting Timeouts for Large Uploads
Very large files (like videos) may take a long time to upload. You might need to adjust Nginx’s timeouts to prevent the connection from closing during the upload.
Add these directives within your http, server, or location block:
server {
client_max_body_size 500M;
client_body_timeout 300s; # Time to read the client body
proxy_read_timeout 300s; # Time to wait for a response from backend
fastcgi_read_timeout 300s; # Specific for PHP-FPM
...
}
Why This Works: These directives ensure that neither Nginx nor the upstream process gives up on the long-running request required for a massive upload.
Solution 4: Configuration in the http Block (Global Setting)
If you find yourself needing a universal limit across all sites on the server, you can set the directive in the main nginx.conf file within the http { ... } block.
- Edit the main config: sudo nano /etc/nginx/nginx.conf
- Inside the http block, add: client_max_body_size 20M; (a sensible global default)
- Test and reload Nginx as in Solution 1.
Warning: A very high global limit could increase your server’s vulnerability to resource exhaustion attacks. Prefer setting limits per server or location block where possible.
Troubleshooting & Verifying the Fix
- Clear Browser Cache: Old error messages can be cached. Use Ctrl+F5 or test in an incognito window.
- Check the Correct Configuration File: Ensure you edited the Nginx site configuration that is actually enabled. Files in sites-available must be linked into sites-enabled. You can check with ls -la /etc/nginx/sites-enabled/.
- Inspect the Logs: The Nginx error log (/var/log/nginx/error.log) is your best friend. Monitor it in real time while testing an upload: sudo tail -f /var/log/nginx/error.log.
- Check All Relevant location Blocks: If you have a specific location /wp-admin/ or location ~ \.php$ block, ensure a more restrictive client_max_body_size within it isn’t overriding your server block setting.
- Restart vs. Reload: When changing directives outside of the main http block, reload is usually sufficient. If changes don’t take effect, a full restart can help: sudo systemctl restart nginx.
- You can also consult the official Nginx documentation for the core module (ngx_http_core_module), which defines client_max_body_size, for reference.
Best Practices & Security Considerations
- Set Context-Specific Limits: Don’t just set a 1GB limit globally. Apply a large limit (e.g., 500M) only to the specific upload endpoint (location /upload/) and a smaller, safer default elsewhere.
- Implement Client-Side Validation: Always validate file size in the user’s browser before upload. This provides immediate feedback but is not a substitute for server-side checks.
- Implement Server-Side Application Logic: Your application (e.g., your PHP or Python code) should also check file sizes after upload and enforce business rules.
- Monitor Resources: Large uploads consume bandwidth, disk I/O, and memory. Ensure your server has adequate resources and monitor for abuse.
Conclusion
Resolving the “413 Request Entity Too Large” error on Nginx and Ubuntu is a fundamental rite of passage for sysadmins and developers. It involves a clear understanding of the multi-layered request handling process: from the Nginx web server itself, through to any upstream application processors like PHP-FPM.
By methodically adjusting the client_max_body_size directive in the correct Nginx configuration context, and ensuring complementary settings in your backend (like upload_max_filesize in PHP), you can transform your server from a restrictive gatekeeper into a capable conduit for the data your application needs. Remember the mantra: test your configuration (nginx -t), reload the service, and verify through logs. Now, go forth and enable those large, seamless uploads.
FAQs: 413 Request Entity Too Large Error Nginx Ubuntu
What exactly causes the “413 Request Entity Too Large” error?
This error occurs when the size of data being uploaded (like a file or form submission) exceeds the maximum limit configured in your Nginx server. Nginx has a built-in protection mechanism that rejects overly large requests to prevent server overload and abuse.
Is this error specific to Ubuntu?
No, the error is specific to Nginx configuration. However, the file paths and commands to fix it differ between operating systems. This guide focuses on Ubuntu/Debian systems where Nginx configuration files are typically organized in /etc/nginx/sites-available/ and /etc/nginx/sites-enabled/.
How can I check my current Nginx upload limit?
You can check by examining your Nginx configuration files:
grep -r "client_max_body_size" /etc/nginx/
This command searches for the directive across all Nginx configuration files.
Where should I add the client_max_body_size directive?
You can add it at three levels:
- http block (/etc/nginx/nginx.conf) – Applies to all sites on the server
- server block (in your site configuration file) – Applies to a specific website
- location block – Applies only to specific URLs (like /uploads/)
For most cases, adding it to your site’s server block in /etc/nginx/sites-available/your-site.conf is recommended.
I changed the configuration but still get the error. Why?
Common reasons include:
- Not reloading Nginx: sudo systemctl reload nginx
- Having a lower limit in a more specific location block that overrides your setting
- PHP-FPM limits still being too low (if using PHP)
- Browser cache: Try clearing cache or using incognito mode
- Testing with a file that’s still too large
What’s the difference between reloading and restarting Nginx?
sudo systemctl reload nginx gracefully reloads configuration without dropping connections. sudo systemctl restart nginx completely stops and starts the service, which is more disruptive but sometimes necessary for certain changes.
How do I know which Nginx configuration file to edit?
Check which sites are enabled:
ls -la /etc/nginx/sites-enabled/
Or check your domain’s configuration:
sudo nginx -T | grep "server_name yourdomain.com"
Why do I need to adjust PHP settings if I already changed Nginx?
Nginx handles receiving the request, but PHP processes the actual file upload. If Nginx allows a 100MB upload but PHP is still set to 2MB, PHP will reject it after Nginx has already accepted it.
How do I find my PHP version?
Use either:
php --version
Or for PHP-FPM specifically:
systemctl list-units --type=service | grep fpm
Should upload_max_filesize and post_max_size be the same value?
Best practice is to set post_max_size slightly higher (like 1-2MB more) than upload_max_filesize. This accounts for form metadata and headers that accompany file uploads.
I’m using WordPress and still having issues after fixing Nginx and PHP. Why?
WordPress has its own file size limits in the admin dashboard (Settings → Media). Additionally, some WordPress plugins or themes may impose their own limits. Also check for .htaccess overrides (though these don’t affect Nginx directly).
What happens if I set client_max_body_size to 0?
Setting it to 0 disables checking of client request body size entirely. This is generally not recommended for security reasons, as it could make your server vulnerable to denial-of-service attacks.
Can I set different limits for different file types?
Not directly through Nginx alone, but you can create different location blocks for different upload endpoints and set limits accordingly. Your application should also validate file types server-side.
What’s the maximum value I can set for client_max_body_size?
Technically, you can set it as high as your server’s memory allows, but extremely high values (like several gigabytes) may cause performance issues or timeouts. For very large files, consider implementing chunked uploads or direct-to-cloud storage solutions.
Do I need to adjust client_body_buffer_size as well?
Usually not. client_body_buffer_size controls the buffer for reading the request body. It defaults to 8k or 16k, which is fine for most cases. If you’re handling extremely large uploads, you might increase it to match memory page sizes for efficiency.
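As a sketch of how the two directives might be paired on a heavy upload endpoint (the path and values below are illustrative assumptions, not recommendations):

```nginx
# Illustrative only: a dedicated upload location with a raised body limit
# and a larger read buffer (defaults are 8k or 16k depending on platform)
location /upload/ {
    client_max_body_size    500M;
    client_body_buffer_size 128k;
}
```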
Is increasing the upload limit a security risk?
Increasing it to reasonable levels for your application’s needs is fine. However, setting excessively high limits globally can make your server vulnerable to resource exhaustion attacks. Always set appropriate limits and consider using location-specific restrictions.
How can I prevent abuse of large upload limits?
Implement multiple layers:
- Set reasonable limits per endpoint
- Add rate limiting in Nginx
- Implement authentication for upload endpoints
- Use server-side validation for file types and sizes
- Regularly monitor server resources and logs
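The rate-limiting layer from the list above can be sketched with Nginx's standard limit_req_zone and limit_req directives; the zone name, rate, and path below are illustrative assumptions:

```nginx
# In the http context: a per-IP zone (the name "uploads" and 2 requests/minute
# are assumptions to adapt to your traffic)
limit_req_zone $binary_remote_addr zone=uploads:10m rate=2r/m;

server {
    # Apply the limit only where large bodies are allowed
    location /upload/ {
        limit_req zone=uploads burst=3;
        client_max_body_size 500M;
    }
}
```

Scoping both the rate limit and the size limit to the same location keeps the rest of the site on stricter defaults.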
Will increasing upload limits affect my server’s performance?
Larger uploads consume more bandwidth, disk I/O, and temporary storage. Monitor your server resources, especially if you expect many concurrent large uploads. Consider increasing client_body_timeout for very large files to prevent timeouts.
Should I adjust my firewall or network settings?
If you’re increasing limits to handle very large files (hundreds of MBs or GBs), ensure your firewall isn’t terminating long-lived connections. Also, check that any load balancers or CDNs in front of your server have appropriate timeout settings.
How do I check Nginx error logs for 413 Request Entity Too Large error?
Use:
sudo tail -f /var/log/nginx/error.log
Or search specifically for the telltale message (the 413 status code itself is recorded in the access log, not the error log):
sudo grep "client intended to send too large body" /var/log/nginx/error.log
I’m getting “connection reset” instead of 413 Request Entity Too Large error. Is this related?
Possibly. If the upload is timing out or the connection is being reset during upload, you may need to increase timeout values (client_body_timeout, proxy_read_timeout, or fastcgi_read_timeout).
My changes work for HTTP but not HTTPS. Why?
If you have separate server blocks for HTTP and HTTPS, you need to apply the changes to both configurations. Check if you have duplicate server blocks on different ports (80 and 443).
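A minimal sketch of the duplication this requires (the domain and certificate details are placeholders):

```nginx
# Both the HTTP and HTTPS server blocks need the directive
server {
    listen 80;
    server_name yourdomain.com;
    client_max_body_size 100M;
}

server {
    listen 443 ssl;
    server_name yourdomain.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity
    client_max_body_size 100M;
}
```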
How can I test if my changes worked without actually uploading files?
You can use curl to test:
curl -X POST -H "Content-Type: application/json" -d '{"test":"data"}' http://yourdomain.com/upload
Or create a test script that reports back the maximum upload size allowed.
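When you do want to test with a real upload, generating a file of exact, known size with dd makes the result unambiguous. The curl call is left commented out because it needs a live endpoint (the URL is a placeholder):

```shell
# Create a file of a known size to test with (5 MB here)
dd if=/dev/zero of=/tmp/upload-test.bin bs=1M count=5 2>/dev/null
stat -c %s /tmp/upload-test.bin

# Against a live endpoint (URL is a placeholder), check only the status code:
#   curl -s -o /dev/null -w "%{http_code}\n" \
#        -F "file=@/tmp/upload-test.bin" https://yourdomain.com/upload
# A 413 response means the limit is still below this file's size.
```

Bisecting with different count values quickly reveals the effective limit.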
Do these changes affect GET requests or only POST/PUT?
The client_max_body_size directive affects any request with a body, which primarily means POST, PUT, and PATCH requests. GET requests typically don’t have request bodies, so they aren’t affected.
I’m using Docker. Where do I make these changes?
In Docker, you’ll need to modify the Nginx configuration inside your container or rebuild your image with updated configuration files. The principles are the same, but the file paths depend on your Docker setup.
