If you visit my old website, you’ll be greeted by a welcome page. As an amateur Linux operations engineer, I sometimes do stupid things like accidentally deleting files without a backup. This post is basically a guide for my future self in case I break my server again.
The purpose of using a VPS.
The point of using a VPS instead of a hosting platform is that there are fewer limitations. In my case, I mainly use my VPS as a proxy so my family and friends, who live in a more internet-restricted country, can communicate like people in the rest of the world.
I use a protocol called VMess, an encrypted communication protocol that supports both inbound and outbound proxying. There is a tool called
x-ui that offers multi-user support for VMess and other protocols, and it is simple to set up.
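For reference, x-ui is usually installed with the one-line script from its GitHub repository. The URL below points at the vaxilu/x-ui project and may change, so check the project’s README for the current command before running it:

```shell
# fetch and run the x-ui install script (verify the URL against the project README first)
$ bash <(curl -Ls https://raw.githubusercontent.com/vaxilu/x-ui/master/install.sh)
```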
But before that, I have to prepare everything else first.
Update the repositories and install important packages.
Here is a list of stuff that I want to install right now:
nginx for web hosting and reverse proxying
certbot for obtaining and renewing SSL certificates
docker because I want to set up consistent environments for development rapidly
x-ui, of course, for the multi-user proxy
$ sudo apt update
$ sudo apt install snapd nginx
$ sudo snap install core
$ sudo snap refresh core
$ sudo snap install --classic certbot
$ sudo ln -s /snap/bin/certbot /usr/bin/certbot
What I’ve done here:
First, I prepared my server by updating the package index so apt knows about the latest versions in the repositories, then installed snapd and nginx. Next, I installed
certbot through
snap. Finally, I created a symbolic link so the certbot command is on my PATH.
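With certbot linked, issuing a certificate for a site served by nginx is a one-liner. `example.com` below is a placeholder for your own domain:

```shell
# obtain a certificate and let certbot edit the nginx config for the domain
$ sudo certbot --nginx -d example.com

# simulate renewal to confirm automatic renewal will work
$ sudo certbot renew --dry-run
```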
Add repository and install Docker.
I will use Docker because it’s an elegant way to set up almost anything. It’s simple, and that’s what I care about. They have good guides on their website, but they’re long, so I’m going to shorten them and only write what works for me.
First, install packages to allow apt to use a repository over HTTPS, and add Docker’s official GPG key.
$ sudo apt install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Next, use this command to set up the repository.
$ echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Now we can update the package list, then install Docker Engine,
containerd, and the Docker Compose plugin.
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
We can test our Docker installation with the following command.
$ sudo docker run hello-world
Manage Docker as a non-root user.
Although we have Docker installed, we need root permission to use it. Otherwise, we will be greeted with errors like this:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied. See 'docker run --help'.
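The error comes down to filesystem permissions on the daemon’s socket. You can inspect who is allowed to use it:

```shell
# show the owner, group, and mode of the Docker daemon socket
$ ls -l /var/run/docker.sock
# on a default install this is typically owned by root with group docker,
# mode srw-rw----, so only root and members of the docker group can connect
```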
We can create a Unix group called
docker and add users to it, so when the Docker daemon starts, it creates a Unix socket accessible by members of the
docker group. I’m using Ubuntu, so the
docker group was already created for me by the package install. I can simply add a user to it.
$ sudo usermod -aG docker $USER
Refresh the group membership in the current shell to activate the change (or log out and back in).
$ newgrp docker
Now test again without sudo.
$ docker run hello-world
If you see Docker up and running, you are good to go.
Configure Docker to start on boot.
Ubuntu uses systemd to manage which services start when the system boots. We can simply enable the Docker and containerd services.
$ sudo systemctl enable docker.service
$ sudo systemctl enable containerd.service
From now on, Docker will start when the system boots. It is very handy for someone lazy like me.
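To double-check that the services really are set to start on boot, you can ask systemd directly:

```shell
# prints "enabled" for each unit once the enable commands above have run
$ systemctl is-enabled docker.service containerd.service
```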