How to Deploy Laravel on a VPS (DigitalOcean & Hostinger) — Complete Step-by-Step CLI Guide

Muhammad Zeeshan

I wrote this as the kind of guide I wish I had when I was first setting up production servers. We'll cover not just the how but the why behind every security decision — so you actually understand your server, not just copy-paste commands blindly.

This guide applies to both DigitalOcean Droplets and Hostinger VPS running Ubuntu 22.04 LTS. The commands are nearly identical across both platforms.


1. Prerequisites

Before you start, make sure you have the following ready:

  • A fresh Ubuntu 22.04 LTS VPS on DigitalOcean or Hostinger (minimum 1GB RAM recommended for Laravel)
  • Your domain name pointed to your server's IP address (A record configured in your DNS)
  • Your Laravel project in a Git repository (GitHub, GitLab, or Bitbucket)
  • SSH access to your server (DigitalOcean and Hostinger both give you root credentials on first setup)
  • A local terminal — Mac/Linux Terminal or Windows WSL/PuTTY

A note on Ubuntu version: This guide uses Ubuntu 22.04 LTS (Jammy Jellyfish). The commands also work on Ubuntu 20.04. If you're on CentOS or Debian, the package manager commands will differ slightly.
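
For reference, here is roughly how the update step in this guide maps onto a RHEL-family system (CentOS Stream, Rocky, AlmaLinux). Package names on those distros are illustrative and may differ:

# Ubuntu/Debian (used throughout this guide)
sudo apt update && sudo apt upgrade -y

# RHEL-family rough equivalent
sudo dnf upgrade -y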


2. Initial Server Access

When your VPS is created, you'll get a root password (Hostinger) or you'll have set up an SSH key (DigitalOcean). SSH into your server for the first time:

ssh root@YOUR_SERVER_IP

If you're using an SSH key with DigitalOcean:

ssh -i ~/.ssh/your_key root@YOUR_SERVER_IP

Once you're in, the first thing to do is update all system packages. Never skip this step on a fresh server:

apt update && apt upgrade -y

This updates the package list and upgrades all installed packages to their latest versions. Security patches are included here — running a server with outdated packages is like leaving your front door unlocked.
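
If you would rather not rely on remembering to patch manually, Ubuntu ships an unattended-upgrades package that applies security updates automatically. A minimal setup looks like this:

sudo apt install unattended-upgrades -y

# Interactive prompt: answer "Yes" to enable automatic security updates
sudo dpkg-reconfigure --priority=low unattended-upgrades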


3. Create a Non-Root Sudo User

Working as root all the time is dangerous. A single mistyped command — a stray space or a wrong variable in an rm -rf, say — can destroy data instantly with no warning or confirmation. Create a separate user with sudo privileges and disable direct root login.

# Create a new user (replace 'zeeshan' with your username)
adduser zeeshan

# Add the user to the sudo group
usermod -aG sudo zeeshan

Now copy your SSH keys to the new user so you can log in as them:

# While still logged in as root
rsync --archive --chown=zeeshan:zeeshan ~/.ssh /home/zeeshan

Open a new terminal window and test logging in as your new user before closing the root session:

ssh zeeshan@YOUR_SERVER_IP

Confirm sudo works:

sudo whoami
# Should output: root

Now disable root login via SSH. Edit the SSH config:

sudo nano /etc/ssh/sshd_config

Find and change these lines:

PermitRootLogin no
PasswordAuthentication no

Setting PasswordAuthentication no means only SSH key holders can log in — no password brute-forcing possible. Save the file (Ctrl+X, then Y, then Enter) and restart SSH:

sudo systemctl restart sshd

4. Change the Default SSH Port — And Why It Matters

By default, SSH listens on port 22. This is universally known. The moment a server goes online with port 22 open, automated bots from all over the internet start hammering it with login attempts — trying common usernames and passwords thousands of times per hour. This is not theoretical. Check your auth logs on any fresh server after 10 minutes:

sudo grep "Failed password" /var/log/auth.log

You'll see hundreds of failed attempts from IPs all over the world. Changing the SSH port to a non-standard number won't make you invincible, but it eliminates the vast majority of automated attacks because bots scan port 22 specifically. It's a simple change that immediately reduces your attack surface significantly.
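
To see just how noisy this gets, you can tally failed attempts per source IP. This one-liner is a sketch: the awk field position assumes the standard sshd "Failed password for <user> from <ip> port ..." log format:

# Top offending IPs by number of failed SSH logins
sudo grep "Failed password" /var/log/auth.log \
  | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head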

How to Change the SSH Port

Edit the SSH daemon config:

sudo nano /etc/ssh/sshd_config

Find the line:

#Port 22

Uncomment it and change it to a port number between 1024 and 65535. Pick something non-obvious. Avoid common alternatives like 2222 (bots scan those too):

Port 4827

Before restarting SSH, add the new port to your firewall (UFW) so you don't lock yourself out:

sudo ufw allow 4827/tcp

Now restart SSH:

sudo systemctl restart sshd

Open a new terminal window and test connecting on the new port before closing your current session:

ssh -p 4827 zeeshan@YOUR_SERVER_IP

Once confirmed working, you can close the old session. From now on, always connect with -p 4827 (or whatever port you chose). To make this permanent on your local machine, add it to your SSH config:

# On your LOCAL machine, edit ~/.ssh/config
Host myserver
    HostName YOUR_SERVER_IP
    User zeeshan
    Port 4827
    IdentityFile ~/.ssh/your_key

Now you can simply type ssh myserver to connect.

Further reading: SSH Academy — Changing the SSH Port


5. Disable Ping on Your Server IP — And Why You Should

When someone pings your server IP and gets a response, they've confirmed two things: the IP is active, and a server is running there. This is the first step in most network reconnaissance. Bots and attackers use ping sweeps to discover live servers across IP ranges. Dropping ICMP echo requests makes your server effectively invisible to these automated scans.

This won't stop a determined attacker who already knows your IP, but it does reduce your exposure to opportunistic scanners significantly — and on a server that doesn't need to respond to diagnostic pings from the public internet, there's no reason to leave it enabled.

How to Block Ping with UFW

Edit the UFW before.rules file:

sudo nano /etc/ufw/before.rules

Find this section near the top:

# ok icmp codes for INPUT
-A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT
-A ufw-before-input -p icmp --icmp-type time-exceeded -j ACCEPT
-A ufw-before-input -p icmp --icmp-type parameter-problem -j ACCEPT
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT

Change the last line from ACCEPT to DROP:

-A ufw-before-input -p icmp --icmp-type echo-request -j DROP

Save and reload UFW:

sudo ufw reload

Test it from your local machine:

ping YOUR_SERVER_IP
# Should now show: Request timeout

Note: Keep destination-unreachable and time-exceeded as ACCEPT — these are needed for proper TCP/IP routing and MTU discovery. Only drop the echo-request.


6. Install & Configure fail2ban — Your First Line of Defence

Even with a non-standard SSH port and key-only authentication, it's good practice to run fail2ban. fail2ban monitors your log files in real-time and automatically bans IP addresses that show malicious behaviour — like repeated failed SSH login attempts, or repeated 404 errors on your web server.

Think of it as an automated bouncer. After a configurable number of failed attempts within a time window, fail2ban adds an iptables rule to block that IP for a set period. This protects not just SSH but also your web application, email, and any other service you configure it to watch.

Install fail2ban

sudo apt install fail2ban -y

Configure fail2ban

Never edit /etc/fail2ban/jail.conf directly — it gets overwritten on updates. Instead, create a local override file:

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo nano /etc/fail2ban/jail.local

Find the [DEFAULT] section and set these values:

[DEFAULT]
# Ban an IP for 1 hour (in seconds)
bantime  = 3600

# Look for failures within a 10-minute window
findtime  = 600

# Ban after 5 failed attempts
maxretry = 5

# Your own IP to never ban (replace with your home/office IP)
ignoreip = 127.0.0.1/8 ::1 YOUR_HOME_IP

Now find the [sshd] section and enable it with your custom port:

[sshd]
enabled = true
port    = 4827
logpath = %(sshd_log)s
backend = %(sshd_backend)s

If you want fail2ban to also protect your Nginx web server from brute-force attempts:

[nginx-http-auth]
enabled = true

[nginx-limit-req]
enabled = true
logpath = /var/log/nginx/error.log

Start and enable fail2ban:

sudo systemctl start fail2ban
sudo systemctl enable fail2ban

Check its status and see active jails:

sudo fail2ban-client status
sudo fail2ban-client status sshd

To manually unban an IP if you accidentally lock yourself out:

sudo fail2ban-client set sshd unbanip THE_IP_ADDRESS

Further reading: fail2ban Official Manual


7. Configure UFW Firewall

UFW (Uncomplicated Firewall) is the standard firewall tool on Ubuntu. The rule is simple: deny everything by default, then only allow what you specifically need.

# Set default rules
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow your custom SSH port (use whatever port you chose in Step 4)
sudo ufw allow 4827/tcp

# Allow HTTP and HTTPS
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Enable the firewall
sudo ufw enable

# Check status
sudo ufw status verbose

You should see only ports 4827, 80, and 443 open. Nothing else. If your app needs MySQL from an external service, you can open port 3306 restricted to a specific IP:

sudo ufw allow from TRUSTED_IP to any port 3306

Never open port 3306 to the world — that's a common and serious mistake.
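
Once MySQL is installed (next step), a quick sanity check is to confirm it only listens on localhost, so it is unreachable from outside even if a firewall rule slips. On Ubuntu's default MySQL 8 configuration it binds to 127.0.0.1:

# MySQL should be bound to 127.0.0.1, not 0.0.0.0
sudo ss -tlnp | grep 3306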


8. Install the LEMP Stack (Nginx + PHP + MySQL)

LEMP stands for Linux, Nginx, MySQL (or MariaDB), and PHP. This is the standard stack for Laravel VPS deployment.

Install Nginx

sudo apt install nginx -y
sudo systemctl start nginx
sudo systemctl enable nginx

Install PHP and Required Extensions

Laravel 10+ requires PHP 8.1 or higher. We'll install PHP 8.2 with all extensions Laravel needs:

# Add the Ondrej PHP PPA for latest PHP versions
sudo apt install software-properties-common -y
sudo add-apt-repository ppa:ondrej/php -y
sudo apt update
# Install PHP 8.2 and Laravel's required extensions
sudo apt install php8.2 php8.2-fpm php8.2-mysql php8.2-mbstring php8.2-xml \
php8.2-bcmath php8.2-curl php8.2-zip php8.2-gd php8.2-intl php8.2-redis \
php8.2-tokenizer php8.2-fileinfo -y

Verify PHP is installed:

php -v

Enable OPcache for significantly faster PHP execution (this alone can reduce response times by 50-70%):

sudo nano /etc/php/8.2/fpm/php.ini

Find and set these OPcache values:

opcache.enable=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.revalidate_freq=2
; note: opcache.fast_shutdown was removed in PHP 7.2, so it is not set here
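
Save the file, then restart PHP-FPM so the new settings take effect, and confirm OPcache is on (the grep pattern below is a quick sketch; the trailing space excludes the separate opcache.enable_cli setting):

sudo systemctl restart php8.2-fpm

# Should print: opcache.enable => On => On
php-fpm8.2 -i | grep "opcache.enable "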

Install MySQL

sudo apt install mysql-server -y
sudo mysql_secure_installation

The secure installation script walks you through enabling password validation, removing anonymous users, disabling remote root login, and removing the test database. Answer yes to the removal prompts. Note that on Ubuntu's MySQL 8, the root account authenticates via the auth_socket plugin by default, so the script may not ask you to set a root password.

Create a database and user for your Laravel app:

sudo mysql   # root authenticates via auth_socket, so no password is needed when run with sudo
-- Inside MySQL shell
CREATE DATABASE your_app_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'your_app_user'@'localhost' IDENTIFIED BY 'StrongPassword123!';
GRANT ALL PRIVILEGES ON your_app_db.* TO 'your_app_user'@'localhost';
FLUSH PRIVILEGES;
EXIT;

Install Composer

curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
composer --version

Install Git

sudo apt install git -y

9. Configure Nginx Virtual Host for Laravel

This is one of the most critical steps in a Laravel server deployment. A correct Nginx server block does several things: it points the document root to Laravel's public/ directory (not the project root — exposing your project root is a serious security risk), handles PHP via PHP-FPM, and routes all requests through Laravel's index.php front controller.

Create a new Nginx server block for your domain:

sudo nano /etc/nginx/sites-available/yourdomain.com

Paste this complete configuration:

server {
    listen 80;
    listen [::]:80;

    server_name yourdomain.com www.yourdomain.com;

    # IMPORTANT: Document root points to Laravel's public/ directory
    # Never point this to your project root
    root /var/www/yourdomain.com/public;

    index index.php index.html;

    # Security headers — protect against common web attacks
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";
    add_header X-XSS-Protection "1; mode=block";
    add_header Referrer-Policy "strict-origin-when-cross-origin";

    # Hide Nginx version from response headers
    server_tokens off;

    # Maximum upload size — match your php.ini setting
    client_max_body_size 64M;

    # Laravel's front controller pattern
    # All requests that don't match a real file go to index.php
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Block access to hidden files (like .env, .git)
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    # PHP-FPM configuration
    # Using Unix socket is faster than TCP (127.0.0.1:9000)
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;

        # Pass real IP to PHP (useful for logging and rate limiting)
        fastcgi_param REMOTE_ADDR $remote_addr;
    }

    # Cache static assets aggressively
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|ttf|svg)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Don't log favicon or robots.txt requests
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_log  /var/log/nginx/yourdomain.com_error.log;
    access_log /var/log/nginx/yourdomain.com_access.log;
}

Why These Configuration Choices Matter

  • root /var/www/.../public — Pointing to public/ means your .env file, application code, and vendor/ directory are never directly accessible from the web. This is fundamental Laravel security.
  • Security headers — X-Frame-Options prevents clickjacking attacks. X-Content-Type-Options stops MIME sniffing. These cost you nothing but protect your users.
  • Unix socket vs TCP — unix:/var/run/php/php8.2-fpm.sock is faster than 127.0.0.1:9000 because it skips the TCP stack entirely. Use a socket when Nginx and PHP-FPM are on the same server (which they almost always are).
  • location ~ /\. — This blocks access to any hidden file or directory. Without this, someone could potentially access yourdomain.com/.env directly.
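
Once the site is live, it is worth verifying that the hidden-file rule actually works. You should get a 403 response, never the file contents:

# Expect "403 Forbidden" — if you ever see your .env contents, stop and fix the config
curl -I http://yourdomain.com/.env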

Enable the site by creating a symlink and test the configuration:

# Enable the site
sudo ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/

# Test Nginx config for syntax errors — always do this before reloading
sudo nginx -t

# If the test passes, reload Nginx
sudo systemctl reload nginx

Always run sudo nginx -t before reloading. Reloading with a broken configuration can take your site offline.


10. Alternative: Apache Virtual Host for Laravel

If your hosting environment uses Apache (common on cPanel and some Hostinger plans), here's how to configure a virtual host for Laravel. The key difference from Nginx is that Apache uses .htaccess files for URL rewriting, which Laravel already includes in its public/ directory.

First, enable the required Apache modules:

sudo a2enmod rewrite
sudo a2enmod headers
sudo a2enmod ssl
sudo systemctl restart apache2

Install PHP and its modules for Apache:

sudo apt install libapache2-mod-php8.2 -y

Create the virtual host configuration:

sudo nano /etc/apache2/sites-available/yourdomain.com.conf

Paste this configuration:

<VirtualHost *:80>
    ServerName yourdomain.com
    ServerAlias www.yourdomain.com

    # Document root points to Laravel's public/ directory
    DocumentRoot /var/www/yourdomain.com/public

    # Security: hide Apache version details on error pages.
    # Note: ServerTokens Prod is only valid in the main server config,
    # not inside a VirtualHost; set it in /etc/apache2/conf-available/security.conf
    ServerSignature Off

    # Security headers
    Header always set X-Frame-Options "SAMEORIGIN"
    Header always set X-Content-Type-Options "nosniff"
    Header always set X-XSS-Protection "1; mode=block"
    Header always set Referrer-Policy "strict-origin-when-cross-origin"

    <Directory /var/www/yourdomain.com/public>
        # AllowOverride All is required for Laravel's .htaccess to work
        # Without this, URL rewriting breaks and you get 404 errors
        AllowOverride All
        Require all granted

        Options -Indexes -MultiViews
    </Directory>

    # Block access to sensitive files
    <FilesMatch "^(\.env|composer\.(json|lock))">
        Require all denied
    </FilesMatch>

    ErrorLog ${APACHE_LOG_DIR}/yourdomain.com_error.log
    CustomLog ${APACHE_LOG_DIR}/yourdomain.com_access.log combined
</VirtualHost>

Why AllowOverride All is Critical for Laravel on Apache

Laravel's public/.htaccess file handles URL rewriting — it redirects all requests to index.php so Laravel's router can handle them. Without AllowOverride All, Apache ignores the .htaccess file entirely, and every route except the homepage returns a 404. This is the single most common Laravel/Apache misconfiguration.
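
For context, the rewrite logic in Laravel's public/.htaccess boils down to this (abridged; the file Laravel actually ships includes a few additional rules, such as Authorization-header forwarding):

<IfModule mod_rewrite.c>
    RewriteEngine On

    # Send everything that isn't a real file or directory to index.php
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [L]
</IfModule>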

Enable the site:

sudo a2ensite yourdomain.com.conf

# Disable the default site if you haven't already
sudo a2dissite 000-default.conf

# Test Apache config
sudo apache2ctl configtest

# Reload Apache
sudo systemctl reload apache2

11. Deploy Your Laravel Application

Now let's get your actual application on the server. Create the web directory and clone your repository:

# Create the directory for your app
sudo mkdir -p /var/www/yourdomain.com

# Give your user ownership
sudo chown -R $USER:$USER /var/www/yourdomain.com

# Clone your repository
cd /var/www
git clone https://github.com/yourusername/your-repo.git yourdomain.com

cd yourdomain.com

Install Composer Dependencies

# --no-dev skips development dependencies (testing tools, debugbars, etc.)
# --optimize-autoloader generates a faster class autoloader for production
composer install --optimize-autoloader --no-dev

Set Up Environment File

cp .env.example .env
nano .env

Configure these critical values in your .env:

APP_NAME="Your App Name"
APP_ENV=production
APP_KEY=                  # Leave blank — generate below
APP_DEBUG=false           # CRITICAL: always false in production
APP_URL=https://yourdomain.com

LOG_CHANNEL=stack
LOG_LEVEL=error           # Only log errors in production, not debug info

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=your_app_db
DB_USERNAME=your_app_user
DB_PASSWORD=StrongPassword123!

CACHE_DRIVER=file         # Change to redis if you have Redis installed
QUEUE_CONNECTION=database # Or redis for better performance
SESSION_DRIVER=file

Generate the application key:

php artisan key:generate

Set Correct File Permissions

This is where many deployments break. The web server (running as www-data on Ubuntu) needs write access to storage/ and bootstrap/cache/, but nothing else should be writable by the web server:

# Set ownership — your user owns the files, www-data is the group
sudo chown -R $USER:www-data /var/www/yourdomain.com

# Directories need execute permission to be traversable
sudo find /var/www/yourdomain.com -type d -exec chmod 755 {} \;

# Files should be readable but not executable
sudo find /var/www/yourdomain.com -type f -exec chmod 644 {} \;

# Storage and cache must be writable by www-data
sudo chmod -R 775 /var/www/yourdomain.com/storage
sudo chmod -R 775 /var/www/yourdomain.com/bootstrap/cache

Run Database Migrations

php artisan migrate --force

The --force flag is required in production because Laravel asks for confirmation before running migrations on a production environment. Only use this when you're sure about your migration.

Seed the Database (if needed)

php artisan db:seed --force

Laravel Performance Optimizations

Run all of these on every deployment. They cache your routes, config, and views so Laravel doesn't have to re-parse them on every request:

# Cache configuration — reads .env once, caches the result
php artisan config:cache

# Cache routes — pre-compiles your routes/web.php and routes/api.php
php artisan route:cache

# Cache views — pre-compiles Blade templates
php artisan view:cache

# Generate optimized autoloader
composer dump-autoload --optimize

Important: After running config:cache, your .env file is no longer read directly. Always re-run php artisan config:cache after changing any .env value, or clear the cache first with php artisan config:clear.

Create the storage symlink so uploaded files are publicly accessible:

php artisan storage:link
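
For subsequent releases, the steps above can be condensed into a small deploy script. This is a sketch that assumes the paths and a 'main' branch as used in this guide; review and adapt it before trusting it with production:

#!/usr/bin/env bash
set -e  # stop on the first error

cd /var/www/yourdomain.com

php artisan down                                 # put the app in maintenance mode
git pull origin main                             # assumes your deploy branch is 'main'
composer install --optimize-autoloader --no-dev
php artisan migrate --force
php artisan config:cache
php artisan route:cache
php artisan view:cache
php artisan queue:restart                        # workers restart and pick up the new code
php artisan up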

12. Install SSL Certificate with Let's Encrypt

There is no reason to run a production application without HTTPS in 2024. Let's Encrypt provides free SSL certificates with automatic renewal. Google also penalises non-HTTPS sites in search rankings.

# Install Certbot and the Nginx plugin
sudo apt install certbot python3-certbot-nginx -y

# Obtain and automatically configure SSL for your domain
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

Certbot will ask for your email address (for renewal reminders) and whether to redirect HTTP to HTTPS. Choose option 2 — Redirect. This automatically updates your Nginx config to handle HTTPS and redirect all HTTP traffic.

For Apache, use the Apache plugin instead:

sudo apt install python3-certbot-apache -y
sudo certbot --apache -d yourdomain.com -d www.yourdomain.com

Verify Auto-Renewal

Let's Encrypt certificates expire every 90 days. Certbot installs a systemd timer that renews them automatically. Test it:

sudo certbot renew --dry-run

If the dry run succeeds, your renewal is properly configured.

Update Laravel APP_URL

Now that HTTPS is active, make sure your .env reflects it:

APP_URL=https://yourdomain.com

And if your app is behind a proxy or load balancer (common with DigitalOcean Load Balancers), add this to your App\Http\Middleware\TrustProxies middleware or use Laravel's built-in trusted proxies configuration:

// In app/Http/Middleware/TrustProxies.php
protected $proxies = '*';

Without this, request()->secure() returns false even over HTTPS, which can cause mixed content warnings.


13. Configure Laravel Queue Workers & Scheduler

If your Laravel app sends emails, processes images, or does any background work, you're using queues. Queue workers are long-running processes that need to survive server reboots and restart automatically if they crash. Supervisor is the standard tool for this.

Install Supervisor

sudo apt install supervisor -y

Configure a Queue Worker

sudo nano /etc/supervisor/conf.d/laravel-worker.conf

Paste this configuration:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/yourdomain.com/artisan queue:work --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/yourdomain.com/storage/logs/worker.log
stopwaitsecs=3600

Key settings explained:

  • numprocs=2 — Runs 2 worker processes. Increase for high-volume queues.
  • --max-time=3600 — Worker exits cleanly after 1 hour, preventing memory bloat on long-running processes. Supervisor restarts it immediately.
  • --tries=3 — Failed jobs are retried 3 times before being marked as failed.
  • autorestart=true — If the worker crashes, Supervisor restarts it automatically.

Load and start the worker:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*

# Check status
sudo supervisorctl status

Configure Laravel Scheduler

Laravel's task scheduler runs via a single cron entry. Add it with:

sudo crontab -u www-data -e

Add this line:

* * * * * cd /var/www/yourdomain.com && php artisan schedule:run >> /dev/null 2>&1

The cron entry fires every minute; Laravel's scheduler then decides internally which tasks are actually due, based on the frequency each task defines.
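
You can confirm the entry was saved, and see which tasks Laravel has registered, with:

# List www-data's crontab — the schedule:run line should appear
sudo crontab -u www-data -l

# Show registered scheduled tasks and when each runs next (Laravel 9+)
php artisan schedule:list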


14. Final Checks & Performance Optimizations

Verify Your Deployment

# Check PHP-FPM is running
sudo systemctl status php8.2-fpm

# Check Nginx is running
sudo systemctl status nginx

# Check MySQL is running
sudo systemctl status mysql

# Check Supervisor workers are running
sudo supervisorctl status

# Check fail2ban is running
sudo systemctl status fail2ban

# View your UFW rules
sudo ufw status verbose

Test Your Application

# Test Laravel can connect to the database
php artisan tinker
# Inside Tinker:
# DB::connection()->getPdo();
# Exit with Ctrl+D (or type: exit)

# Check for any configuration errors
php artisan about

Set Up Log Rotation

Laravel logs can grow large in production. Set up log rotation so they don't fill your disk:

sudo nano /etc/logrotate.d/laravel

Paste this configuration:

/var/www/yourdomain.com/storage/logs/*.log {
    daily
    missingok
    rotate 14
    compress
    notifempty
    create 0664 www-data www-data
    sharedscripts
}

Check Server Response Time

After all optimizations, test your server response time:

# On Linux or macOS:
curl -o /dev/null -s -w "Time to first byte: %{time_starttransfer}s\n" https://yourdomain.com

# On Windows PowerShell (use curl.exe, not the Invoke-WebRequest alias):
curl.exe -o NUL -s -w "Time to first byte: %{time_starttransfer}s\n" https://yourdomain.com

With OPcache, config caching, and route caching enabled, you should see times well under 300ms on a properly sized VPS.

Monitor Your Server

A few useful commands to keep in your toolkit:

# Real-time server resource usage
htop

# Check disk usage
df -h

# Check memory usage
free -h

# Watch Nginx access logs in real-time
sudo tail -f /var/log/nginx/yourdomain.com_access.log

# Watch Laravel logs in real-time
tail -f /var/www/yourdomain.com/storage/logs/laravel.log

# Check fail2ban banned IPs
sudo fail2ban-client status sshd



Wrapping Up

At this point, your Laravel application is running on a production VPS with a hardened server configuration: non-root SSH access, a custom SSH port, disabled ping responses, fail2ban monitoring, a locked-down firewall, properly configured Nginx or Apache virtual host, free SSL, supervised queue workers, and full caching optimization.

This isn't a "good enough for now" setup — this is production-grade infrastructure. The difference between this and a basic upload-and-hope deployment is the difference between a server that gets compromised in a week and one that runs reliably for years.

If any step gave you trouble, or you'd rather have someone handle this deployment professionally — that's exactly what I do. Check out my Laravel & MERN Stack Deployment Services or hire me directly through my Upwork profile for a fast, fully documented deployment.

Muhammad Zeeshan

Muhammad Zeeshan is a Full-Stack Web Developer and the founder of muhammadzeeshan.dev. He specializes in building secure and scalable web applications using Laravel, React.js, Node.js, and MySQL. Through the articles published here, Zeeshan shares practical insights from his experience in full-stack development, API design, and server deployment — helping developers and businesses create faster, smarter, and more reliable web solutions.