My Docker web server setup for Laravel - config in detail

 

As already announced, I have replaced Plesk with Docker for my websites. Although I now manage the pages via the terminal rather than a GUI, the setup makes me much more flexible: I can copy the containers at any time and start them on another server, put a new page online faster, or change the web server or the PHP version for a particular page. New features such as Laravel Octane and Swoole are also easy to adopt. The reasoning behind this setup can be read on the following page: Docker vs Plesk, for running websites.

Docker Basics

Docker allows applications to be launched by command in a so-called container.
A container is an isolated environment independent of the operating system (OS):
when a container is first launched, Docker automatically downloads all the necessary sources
from the internet.
Docker can be installed on Windows, macOS or a Linux distribution.
In this post, I have summarized my current web server config files. For my setup, I packed all the services needed to run a Laravel web server into one container, except for the database. The processes are started and monitored by supervisor.

In both variants I use Nginx to deliver static content directly, rather than through the PHP web server. For the PHP application itself I tested two setups: one with php-fpm and one with Swoole as the web worker.

Variant php-fpm:

  • supervisord,
  • Nginx,
  • php-fpm,
  • redis and
  • cron

Variant swoole: 

  • supervisord,
  • Nginx,
  • swoole,
  • redis and
  • cron

Cron could also be run from the host: an entry in the host's crontab that starts the Laravel scheduler every minute inside the web server container.
I prefer starting cron inside the container as well, because then the setup does not need an extra entry in the host's crontab and works out of the box without additional configuration.
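For comparison, the host-based variant would be a single crontab entry on the host that runs the scheduler inside the already-running container (a sketch; `laravel_web` is the container name from my docker-compose.yml):

```shell
# Host crontab entry (crontab -e on the host); runs the Laravel scheduler
# inside the running container every minute:
* * * * * docker exec laravel_web /usr/bin/php /var/www/artisan schedule:run >> /dev/null 2>&1
```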

Web server config Laravel website: variant with php-fpm

For Docker, I created the following config files:

docker-compose.yml

services:
  web:
    container_name: laravel_web
    build:
      context: .
      dockerfile: Dockerfile
    expose:
      - "80"   
    #For direct test access, remove "#" in the following 2 lines. Call: http://localhost:83 or http://ServerIP:83
    #ports:
    #  - "83:80"
    restart: always

    #Labels for ReverseProxy, see: https://www.libe.net/en-traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.home.rule=Host(`laravel.domain.tld`)"      
      - "traefik.http.routers.home.entrypoints=web"
      - "traefik.http.routers.home.entrypoints=websecure"
      - "traefik.http.routers.home.tls.certresolver=myresolver"
      - "traefik.http.services.home.loadbalancer.server.port=80"
    volumes:
      - "./www:/var/www"
  mysql:
    image: 'mysql'
    container_name: laravel_mysql
    environment:
        MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
        MYSQL_DATABASE: '${DB_DATABASE}'
        MYSQL_USER: '${DB_USERNAME}'
        MYSQL_PASSWORD: '${DB_PASSWORD}'   
    restart: always
    volumes:
        - './db:/var/lib/mysql'
    healthcheck:
      test: ["CMD", "mysqladmin", "ping"]

#Without using a reverse proxy (https://www.libe.net/en-traefik) the webproxy network is likely to be missing
#and the following lines can be removed or commented out. Alternatively, the network can be created with "docker network create webproxy".      
networks:
  default:
    name: webproxy
    external: true

As subfolders I use "db" for the database and "www" for the website.

To use an SSL certificate for the website, I put the Traefik reverse proxy in front, which is why the docker-compose.yml references the network "webproxy". The database variables (DB_DATABASE etc.) live in Laravel's .env file and are passed in with the "--env-file" parameter when starting the containers; "--build" triggers the build process, and "-d" starts the containers in the background:

docker-compose --env-file ./www/.env up -d --build
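A few standard Docker Compose commands I use with this setup afterwards (service name `web` as defined in the docker-compose.yml):

```shell
# Follow the logs of the web container:
docker-compose logs -f web

# Open a shell in the running container, e.g. for artisan or composer commands:
docker-compose exec web bash

# Stop and remove the containers; the bind mounts ./db and ./www remain on disk:
docker-compose down
```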

Dockerfile

The following setup includes PHP-FPM and Swoole as a PHP extension. Depending on the web server setup, FPM or Swoole can be used, or the unused variant can be removed:

FROM ubuntu:20.04

ARG WWWGROUP

WORKDIR /var/www

ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
ENV PHP_VERSION 8.0

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Install Dependencies like in Laravel Sail:
RUN apt-get update \
    && apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev tesseract-ocr python2 \
    && mkdir -p ~/.gnupg \
    && chmod 600 ~/.gnupg \
    && echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
    && apt-key adv --homedir ~/.gnupg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E5267A6C \
    && apt-key adv --homedir ~/.gnupg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C300EE8C \
    && echo "deb http://ppa.launchpad.net/ondrej/php/ubuntu focal main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
    && apt-get update \
    && apt-get install -y php${PHP_VERSION}-cli php${PHP_VERSION}-dev \
       php${PHP_VERSION}-pgsql php${PHP_VERSION}-sqlite3 php${PHP_VERSION}-gd \
       php${PHP_VERSION}-curl php${PHP_VERSION}-memcached \
       php${PHP_VERSION}-imap php${PHP_VERSION}-mysql php${PHP_VERSION}-mbstring \
       php${PHP_VERSION}-xml php${PHP_VERSION}-zip php${PHP_VERSION}-bcmath php${PHP_VERSION}-soap \
       php${PHP_VERSION}-intl php${PHP_VERSION}-readline \
       php${PHP_VERSION}-msgpack php${PHP_VERSION}-igbinary php${PHP_VERSION}-ldap \
       php${PHP_VERSION}-gmp php${PHP_VERSION}-mbstring php${PHP_VERSION}-redis \
    && php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
    && curl -sL https://deb.nodesource.com/setup_15.x | bash - \
    && apt-get install -y nodejs \
    && curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
    && apt-get update \
    && apt-get install -y yarn \
    && apt-get install -y mysql-client \
    && apt-get -y autoremove \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# add nginx
RUN apt-get update && apt-get install -y software-properties-common && apt-add-repository ppa:nginx/stable -y && apt-get install -y php${PHP_VERSION}-fpm nginx && \
    mkdir -p /run/php && chmod -R 755 /run/php && \
    sed -i 's|.*listen =.*|listen=9000|g' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf && \
    sed -i 's|.*error_log =.*|error_log=/proc/self/fd/2|g' /etc/php/${PHP_VERSION}/fpm/php-fpm.conf && \
    sed -i 's|.*access.log =.*|access.log=/proc/self/fd/2|g' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf && \
    sed -i 's|.*user =.*|user=root|g' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf && \
    sed -i 's|.*group =.*|group=root|g' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf && \
    sed -i -e "s/;catch_workers_output\s*=\s*yes/catch_workers_output = yes/g" /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf && \
    sed -i 's#.*variables_order.*#variables_order=EGPCS#g' /etc/php/${PHP_VERSION}/fpm/php.ini && \
    sed -i 's#.*date.timezone.*#date.timezone=UTC#g' /etc/php/${PHP_VERSION}/fpm/php.ini && \
    sed -i 's#.*clear_env.*#clear_env=no#g' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf && \
    sed -i 's#.*pm = dynamic*#pm = ondemand#g' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf && \
    sed -i '/pm.max_children = /c\pm.max_children = 50' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf && \
    sed -i '/pm.process_idle_timeout = /c\pm.process_idle_timeout = 60s' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf && \
    sed -i '/pm.max_requests = /c\pm.max_requests = 15' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf 

# add swoole
RUN pecl install --configureoptions 'enable-sockets="no" enable-openssl="no" enable-http2="no" enable-mysqlnd="no" enable-swoole-json="no" enable-swoole-curl="no"' swoole
#You should add "extension=swoole.so" to php.ini
RUN echo "extension=swoole.so" > /etc/php/${PHP_VERSION}/cli/conf.d/99-php.ini
RUN echo "extension=swoole.so" > /etc/php/${PHP_VERSION}/fpm/conf.d/99-php.ini

# add redis
RUN apt-get update && apt-get install -y redis-server

# add cron
RUN apt-get install -y cron
RUN echo "* * * * * root /usr/bin/php /var/www/artisan schedule:run >> /dev/null 2>&1" > /etc/cron.d/laravel-scheduler
RUN chmod 644 /etc/cron.d/laravel-scheduler

# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www

#for supervisor to start the right version:
RUN mv /usr/sbin/php-fpm${PHP_VERSION} /usr/sbin/php-fpm 

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/${PHP_VERSION}/cli/conf.d/99-php.ini
COPY php.ini /etc/php/${PHP_VERSION}/fpm/conf.d/99-php.ini
COPY nginx.conf /etc/nginx/nginx.conf
COPY mysql.cnf /etc/mysql/conf.d/mysql.cnf

CMD /usr/bin/supervisord
EXPOSE 80
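The image can also be built and smoke-tested on its own, without docker-compose (the tag `laravel-web` is just an example name):

```shell
# Build the image from the directory containing the Dockerfile:
docker build -t laravel-web .

# Quick check that the expected PHP version ended up inside the image:
docker run --rm laravel-web php -v
```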

supervisord.conf

To monitor and start the necessary processes I use supervisor; here is my supervisord.conf for the php-fpm variant:

[supervisord]
nodaemon=true
user=root
logfile=/var/log/supervisord.log
pidfile=/var/run/supervisord.pid

[program:nginx]
command=/usr/sbin/nginx
autostart = true
autorestart=true
stdout_logfile=/dev/nginx-stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/nginx-stderr
stderr_logfile_maxbytes=0

[program:php-fpm]
command=/usr/sbin/php-fpm -R --nodaemonize
autostart=true
autorestart=true
stdout_logfile=/var/log/php-fpm-stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/php-fpm-stderr.log
stderr_logfile_maxbytes=0
exitcodes=0

[program:redis]
command=redis-server
autostart=true
autorestart=true
stdout_logfile=/var/log/redis-stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/redis-stderr.log
stderr_logfile_maxbytes=0
exitcodes=0

[program:cron]
command=cron
autostart=true
autorestart=true
stdout_logfile=/var/log/cron-stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/cron-stderr.log
stderr_logfile_maxbytes=0
exitcodes=0

nginx.conf

The Nginx configuration distinguishes between static content and PHP pages and, depending on a certain cookie, decides whether statically cached files are served or the request is passed to php-fpm. For caching the static files I use the Laravel package page-cache (see also: JosephSilber/page-cache and website stress test - performance measure requests/second).

#worker_processes  2;
daemon off;
user root;

#pid        logs/nginx.pid;
events {
    worker_connections  1024;
}

error_log /dev/stdout info;

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    sendfile on;
    keepalive_timeout  65;
    gzip  on;
    gzip_vary on;
    gzip_min_length 10240;
    gzip_proxied any;
    gzip_disable msie6;
    gzip_comp_level 1;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    error_log /dev/stdout;

    server {
        listen 80 ;

        server_name _;

#        auth_basic           "Test Area";
#        auth_basic_user_file /var/www/.htpasswd;
        root /var/www/public;


        #redirect index.php
        if ($request_uri ~* "^/index\.php/(.*)") {
            return 301 /$1;
        }


        location ~* ^/storage/.*\.(js|css|png|jpg|jpeg|gif|svg|ico)$ {
            expires 7d;
            add_header Cache-Control "public, no-transform";
        }


        #set variables for Cache...
        set $shouldusecache4root @usecache4root;
        set $shouldusecache4pages @usecache4pages;
        if ($http_cookie ~* "nocache=YES(?:;|$)") {
            set $shouldusecache4root @nocache4root;
            set $shouldusecache4pages @nocache4pages;
        }
        if ($query_string != "") {
            set $shouldusecache4root @nocache4root;
            set $shouldusecache4pages @nocache4pages;
        }
        #hack locations...
        location = / {
            try_files /dev/null $shouldusecache4root;
        }
        location / {
            try_files /dev/null $shouldusecache4pages;
        }
        #use named locations from hacked locations...
        location @usecache4root {
            try_files /page-cache/pc__index__pc.html /index.php?$is_args$args;
        }
        location @nocache4root {
            try_files $uri /index.php?$is_args$args;
        }

        location @usecache4pages {
            try_files $uri $uri/ /page-cache/$uri.html /page-cache/$uri.json /index.php$is_args$args;
        }
        location @nocache4pages {
            try_files $uri $uri/ /index.php$is_args$args;
        }
        

        location ~ ^/index\.php(/|$) {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            include /etc/nginx/fastcgi_params;

            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT $realpath_root;

            # Prevents URIs that include the front controller. This will 404:
            # http://domain.tld/app.php/some-path
            # Remove the internal directive to allow URIs like this
            internal;
        }

    }
}
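The cookie and query-string switches can be checked with curl once the container is reachable, for example via the commented-out test port 83 from the docker-compose.yml (a sketch; the URLs are assumptions for a local test):

```shell
# Served from the page cache (no cookie, no query string):
curl -sI http://localhost:83/ | head -n 1

# The nocache cookie bypasses the page cache and hits php-fpm:
curl -sI --cookie "nocache=YES" http://localhost:83/ | head -n 1

# A query string also bypasses the cache:
curl -sI "http://localhost:83/?preview=1" | head -n 1
```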

php.ini

[PHP]
post_max_size=512M
upload_max_filesize=512M
variables_order=EGPCS
max_execution_time=60

[opcache]
opcache.enable=1
; 0 means it will check on every request
; 0 is irrelevant if opcache.validate_timestamps=0 which is desirable in production
opcache.revalidate_freq=0
opcache.validate_timestamps=1
opcache.max_accelerated_files=30000
opcache.memory_consumption=256
opcache.max_wasted_percentage=10
opcache.interned_strings_buffer=16
opcache.fast_shutdown=1

mysql.cnf - Tuning mysql Memory Usage

To run multiple websites, the memory usage of MySQL can be reduced by disabling the performance schema: instead of more than 500 MB, MySQL needs only about 300 MB per container for my website without the performance schema.

[mysqld]
performance_schema = 0
expire_logs_days = 2
key_buffer_size = 5M
innodb_buffer_pool_size = 60M
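The effect can be verified with docker stats, which shows the memory usage per container:

```shell
# One-off snapshot of memory usage for all running containers:
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"
```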

Laravel Swoole and Octane: Variant with swoole

To use Swoole as a web server for Laravel Octane, you only need to modify the php.ini, nginx.conf and supervisord.conf files:
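Octane itself is installed in the Laravel project with the standard commands from the Octane documentation; the manual start shown last is the same command the supervisor config runs permanently:

```shell
composer require laravel/octane
php artisan octane:install --server=swoole

# Manual test start for debugging (supervisor runs this in the container):
php artisan octane:start --server=swoole --host=0.0.0.0 --port=8000
```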

supervisord.conf

[supervisord]
nodaemon=true
user=root
logfile=/var/log/supervisord.log
pidfile=/var/run/supervisord.pid

[program:nginx]
command=/usr/sbin/nginx
autostart = true
autorestart=true
stdout_logfile=/dev/nginx-stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/nginx-stderr
stderr_logfile_maxbytes=0

[program:octane]
command=/usr/bin/php -d variables_order=EGPCS /var/www/artisan octane:start --server=swoole --watch --host=0.0.0.0 --port=8000
autostart=true
autorestart=true
stdout_logfile=/var/log/php-fpm-stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/php-fpm-stderr.log
stderr_logfile_maxbytes=0
exitcodes=0

[program:redis]
command=redis-server
autostart=true
autorestart=true
stdout_logfile=/var/log/redis-stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/redis-stderr.log
stderr_logfile_maxbytes=0
exitcodes=0

[program:cron]
command=cron
autostart=true
autorestart=true
stdout_logfile=/var/log/cron-stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/cron-stderr.log
stderr_logfile_maxbytes=0
exitcodes=0

php.ini

For Swoole it is necessary to load the Swoole extension in the php.ini file:

[PHP]
post_max_size=512M
upload_max_filesize=512M
variables_order=EGPCS
max_execution_time=240
memory_limit = 512M

[opcache]
opcache.enable=1
opcache.revalidate_freq=0
opcache.validate_timestamps=1
opcache.max_accelerated_files=30000
opcache.memory_consumption=256
opcache.max_wasted_percentage=10
opcache.interned_strings_buffer=16
opcache.fast_shutdown=1
opcache.jit_buffer_size=100M
opcache.jit=1255

extension=swoole.so
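Whether the extension is actually loaded can be checked on the command line, since Octane runs through the PHP CLI:

```shell
# Prints "swoole" if the extension is loaded for the CLI:
php -m | grep -i swoole
```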

nginx.conf

Again, the Nginx configuration distinguishes between static content and PHP pages and, depending on a specific cookie, decides whether statically cached files are served or the request is passed to Swoole. For caching the static files I use the Laravel package page-cache (see also: JosephSilber/page-cache and website stress test - performance measure requests/second).

daemon off;
user root;

#pid        logs/nginx.pid;
events {
    worker_connections  1024;
}

error_log /dev/stdout info;

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    sendfile on;
    keepalive_timeout  65;
    gzip  on;
    gzip_vary on;
    gzip_min_length 10240;
    gzip_proxied any;
    gzip_disable msie6;
    gzip_comp_level 1;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;


    error_log /dev/stdout;

    server {
        listen 80 ;

        server_name _;

        auth_basic           "Test Area";
        auth_basic_user_file /var/www/.htpasswd;

        root /var/www/public;

        #redirect index.php
        if ($request_uri ~* "^/index\.php/(.*)") {
            return 301 /$1;
        }

        location ~* ^/storage/.*\.(js|css|png|jpg|jpeg|gif|svg|ico)$ {
            expires 7d;
            add_header Cache-Control "public, no-transform";
        }

        #set variables for Cache...
        set $shouldusecache4root @usecache4root;
        set $shouldusecache4pages @usecache4pages;
        if ($http_cookie ~* "nocache=YES(?:;|$)") {
            set $shouldusecache4root @nocache4root;
            set $shouldusecache4pages @nocache4pages;
        }
        if ($query_string != "") {
            set $shouldusecache4root @nocache4root;
            set $shouldusecache4pages @nocache4pages;
        }

        #hack locations...
        location = / {
            try_files /dev/null $shouldusecache4root;
        }
        location / {
            try_files /dev/null $shouldusecache4pages;
        }
        #use named locations from hacked locations...
        location @usecache4root {
            try_files /page-cache/pc__index__pc.html @swoole;
        }
        location @nocache4root {
            try_files $uri @swoole;
        }

        location @usecache4pages {
            try_files $uri $uri/ /page-cache/$uri.html /page-cache/$uri.json @swoole;
        }
        location @nocache4pages {
            try_files $uri $uri/ @swoole;
        }
        
        location @swoole {
            set $suffix "";
            if ($uri = /index.php) {
                set $suffix ?$query_string;
            }
            proxy_http_version 1.1;
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 120s;
            proxy_set_header Connection "keep-alive";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Real-PORT $remote_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header Scheme $scheme;
            proxy_set_header Server-Protocol $server_protocol;
            proxy_set_header Server-Name $server_name;
            proxy_set_header Server-Addr $server_addr;
            proxy_set_header Server-Port $server_port;
            proxy_set_header X-Requested-With $http_x_requested_with;
            proxy_pass http://127.0.0.1:8000$suffix;
            proxy_cookie_path / /;
        }

    }
}

Server Tuning

In addition to the actual Docker installation, I also made a few changes on the web server itself.

Test environment

To create a test environment with Docker, the configuration can additionally be run on the same host, or on a different host under an alternative DNS name. Since all dependencies and packages are part of the Docker container, the test environment is not just similar to the production web server, it is identical.
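On the same host, a copy can for example be started as a separate Compose project (a sketch; the project name is an assumption, and container_name plus the Traefik Host rule in the docker-compose.yml must be changed for the copy, since both have to be unique per host):

```shell
# Start a second, independent instance of the same configuration:
docker-compose -p laravel-test --env-file ./www/.env up -d --build
```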

Other articles about Docker, see: /topic/docker


Updated: 2023-07-04 by Bernhard
