Sunday, March 27, 2022

Can't Log In to OnlyFans Even Though Your Username and Password Are Correct

In one of its support pages, OnlyFans explains what to do if you can't log in even though your username and password are correct:

Please check your email to see whether your account has been flagged as suspicious or restricted. If you have violated the Terms of Service, your account may be closed. If you previously had an OnlyFans account with a violation, your new account may be restricted or closed. If you believe you have been flagged in error, please contact support with evidence or information that allows us to review your situation.

Saturday, March 26, 2022

How to Find Someone on OnlyFans

OnlyFans is a relatively new social media network that has been on the rise for some time now. While it is not as popular as sites like Facebook, Twitter, or LinkedIn, it has its own unique feature: you have to pay to view content produced by other people. This idea is very attractive to many content creators because it lets them monetize their accounts and keep greater control over their content.

In an effort to better protect creators' privacy and security, OnlyFans has a notoriously restrictive search feature that keeps search results on a tight leash. Although the intent is to promote privacy and encourage more creators to join, it makes finding someone's profile very difficult.

However, you can still find anyone's profile, thanks to a few workarounds.

In this article, we'll show you how.

How to Subscribe to an OnlyFans Account

There are two types of accounts on OnlyFans: user accounts and creator accounts. If you subscribe to an OnlyFans account, you are a user. While some OnlyFans creators don't charge anything to view their content, others may charge a monthly fee of up to $50.

In this article, we'll show you how to subscribe to an OnlyFans account on various devices. We'll also cover how to subscribe to an OnlyFans account without a credit or debit card.

How to Subscribe to an OnlyFans Account Without Using Your Personal Credit Card

As one of the most popular platforms in the adult entertainment industry, OnlyFans has 130 million subscribers and 2 million creators. Although there has been some controversy around the platform, new user accounts are created every day. If you want to create a user account, you have to do it on the OnlyFans website. Keep in mind that you must be at least 18 years old to create an OnlyFans account.

Wednesday, March 23, 2022

What is .htaccess File for?

An .htaccess file is part of what controls the high-level configuration of your website. You edit the contents of your .htaccess file to enable and disable certain features of your server software without editing the server configuration file directly. It's an easy way to make important changes, but you have to be careful that you edit the code correctly: one mistake can cause a lot of problems for your users.

What is 301 Permanent Redirect?

A 301 redirect is a permanent redirect. When a user tries to access an old URL, the server sends their browser the 301 Moved Permanently status code and redirects them to another page. This is useful for site owners and users alike, because visitors are directed to the next most relevant page.
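For example, a permanent redirect can be declared in an .htaccess file with Apache's Redirect directive from mod_alias. The paths and domain below are placeholders, not from any real site:

```apache
# Permanently redirect a single old page to its new location
Redirect 301 /old-page.html https://www.example.com/new-page.html

# Or permanently redirect the whole site to a new domain (requires mod_rewrite)
RewriteEngine On
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

After saving the file, you can verify the behavior with curl -I against the old URL and check for an HTTP 301 Moved Permanently response.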

Are you looking for Linux Distribution for Apple Silicon? Meet: Asahi Linux

Asahi Linux is a project and community with the goal of porting Linux to Apple Silicon Macs, starting with the 2020 M1 Mac Mini, MacBook Air, and MacBook Pro. Apple silicon is a series of system on a chip (SoC) and system in a package (SiP) processors designed by Apple Inc., mainly using the ARM architecture.

Asahi Linux's goal is not just to make Linux run on these machines but to polish it to the point where it can be used as a daily OS. Doing this requires a tremendous amount of work, as Apple Silicon is an entirely undocumented platform. In particular, the project is reverse engineering the Apple GPU architecture and developing an open-source driver for it.

Sunday, March 13, 2022

Golang REST API Tutorial (Part I)

In this series, we're going to build a REST API with Golang. As a base, we'll be using Goyave. Goyave is an opinionated Golang REST API framework aiming at cleanliness, fast development, and power. Goyave applications stay clean and concise thanks to minimalist function calls and route handlers. The framework gives you all the tools to create easily readable and maintainable web applications, letting you concentrate on the business logic. Although Goyave is a full package that requires very little setup and handles many things for you, such as headers or marshaling, it doesn't compromise your freedom of code.

You can read more about the Goyave framework here:

As an example, we'll build a blog API. The requirements are:

  • Go 1.16+
  • Go modules

The directory structure:

├── database
│   ├── model                // ORM models
│   |   └── ...
│   └── seeder               // Generators for database testing
│       └── ...
├── http
│   ├── controller           // Business logic of the application
│   │   └── ...
│   ├── middleware           // Logic executed before or after controllers
│   │   └── ...
│   ├── validation
│   │   └── validation.go    // Custom validation rules
│   └── route
│       └── route.go         // Routes definition
├── resources
│   └── lang
│       └── en-US            // Overrides to the default language lines
│           ├── fields.json
│           ├── locale.json
│           └── rules.json
├── test                     // Functional tests
|   └── ...
├── .gitignore
├── .golangci.yml            // Settings for the Golangci-lint linter
├── config.example.json      // Example config for local development
├── config.test.json         // Config file used for tests
├── go.mod
└── main.go                  // Application entrypoint
Running the project 

First, create your own configuration for your local environment: copy config.example.json to config.json. Then run go run main.go in your project's directory to start the server.

Friday, March 11, 2022

How to fix zsh: command not found: php Error in MacOS Monterey

I had been using PHP with MAMP on my Mac for a year, even on older versions of macOS. Since installing macOS Monterey, typing php in the terminal gives me the message zsh: command not found: php. After some googling, I found out that macOS Monterey no longer ships with PHP. You need Homebrew to install PHP again:
brew install php

Thursday, March 10, 2022

How to Synchronize Multiple Linux Servers with lsyncd

Lsyncd is a free, open-source utility that can be downloaded and used at no charge.

Modern sysadmins use lsyncd for several scenarios such as:
- Load balancing — this works best when the traffic levels are relatively low (or intermittent), or new and modified content is not frequently accessed.
- High availability — keeping in mind that there are multiple aspects of high availability. Using lsyncd to push data to another host that can take over in the event of a hardware failure is an excellent use-case.
- Real-time offsite backups — a great way to keep a running record of the files and folders that have changed will ensure we push the changes to a second host for backup purposes.

Install EPEL Repo

The first step is to add the EPEL repository which contains the lsyncd package.
[root@server ~]# yum -y install epel-release
If everything goes well, you will see a “Complete!” message. Then you need to make sure the EPEL repo is enabled. Open the epel.repo file as follows:
[root@server ~]# vi /etc/yum.repos.d/epel.repo
Change the “enabled=0” to “enabled=1” as follows:
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=$basearch
metalink=$basearch
enabled=1
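If you prefer not to edit the file by hand in vi, the same change can be made with sed. Here is a self-contained demo against a scratch copy of the file; the path and contents below are illustrative, not the full real epel.repo:

```shell
# Create a scratch copy of a minimal epel.repo (contents are illustrative)
cat > /tmp/epel-demo.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7
enabled=0
EOF

# Flip enabled=0 to enabled=1, the same change the article makes in vi
sed -i 's/^enabled=0/enabled=1/' /tmp/epel-demo.repo

grep '^enabled=' /tmp/epel-demo.repo   # prints: enabled=1
```

On the real system you would run the sed command against /etc/yum.repos.d/epel.repo instead.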

Install Lsyncd

Proceed to install the lsyncd package using the following command:
[root@server ~]# yum -y install lsyncd

Configure SSH on Master

At this point we need to configure SSH on the master server so that it can push files to the slave/backup server without requiring password authentication or user intervention. To do so, we will create SSH keys on the master server as follows:
[root@server ~]# ssh-keygen -t rsa
Upon execution of the command above you will be prompted with several questions; you can accept the defaults. When prompted for a passphrase, hit Enter to proceed with an empty passphrase. Optionally, you can attach a comment to the key with the -C flag:
[root@server ~]# ssh-keygen -t rsa -C "email@domain.local"
Once the SSH keys are generated, transfer the public key (the file ending in .pub) to the slave server. This way, the master server will authenticate to the slave without needing a password. Transfer the SSH key with the following command:
[root@server ~]# ssh-copy-id email@domain.local
NOTE: It is normal to be prompted for a password when running the above command; this is because the SSH key is not yet in place.
Before proceeding to the next step, verify that the passwordless authentication works. From the master server, try to ssh to the slave server as follows:
[root@server ~]# ssh email@domain.local
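If you want to script the key-generation step, it can also be run non-interactively. A small sketch with a throwaway key path and an empty passphrase, for demonstration only (not for production keys):

```shell
# Generate a throwaway RSA key pair with no passphrase and no prompts
ssh-keygen -t rsa -b 2048 -N "" -C "email@domain.local" -f /tmp/demo_key -q

# The private key and its .pub public key are created side by side
ls /tmp/demo_key /tmp/demo_key.pub
```

The -N "" flag supplies the empty passphrase and -f sets the output path, so no questions are asked.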

Configure Lsyncd on Master

We are ready to configure the Lsyncd on Master server. The settings we will modify are the following:
- Log files location
- Frequency to write status file
- Synchronization method
- Source folder (in master) we wish to sync
- Destination folder (in slave)
Firstly, open the lsyncd.conf file to start editing it.
[root@server ~]# vi /etc/lsyncd.conf
settings {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd-status.log",
    statusInterval = 10
}

-- Slave server configuration --
-- (the source/target paths and slave hostname below are placeholders; adjust them to your setup)
sync {
    default.rsync,
    source = "/var/www/html/",
    target = "root@slave-server:/var/www/html/",
    rsync = {
        compress = true,
        acls = true,
        verbose = true,
        owner = true,
        group = true,
        perms = true,
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}
Now that Lsyncd is installed and configured, along with the SSH keys for password-less authentication, execute the following commands to start and enable the lsyncd service.
[root@server lsyncd]# systemctl start lsyncd
[root@server lsyncd]# systemctl enable lsyncd
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/lsyncd.service

Verify Lsyncd is Working

Check both your master and slave directories (/var/www/html/) are empty.
[root@server ~]# cd /var/www/html
[root@server html]# ls -luah
total 0
[root@server html]#
[root@slave-server ~]# cd /var/www/html
[root@slave-server html]# ls -luah
total 0
[root@slave-server html]#
Create an empty file on the master server named index.html. You can quickly do so by using the touch command as follows:
[root@server html]# touch index.html
After 15 seconds, lsyncd will notice the changes and push the new file to the slave server. We can monitor the lsyncd log on the master server to verify the transfer has occurred, and what files were transferred across.
[root@server ~]# cd /var/log/lsyncd
[root@server lsyncd]# cat lsyncd.log
Tue Feb 22 09:02:18 2022 Normal: Rsyncing list
Tue Feb 22 09:02:20 2022 Normal: Finished (list): 0
[root@server lsyncd]#
Now, check the /var/www/html/ directory on the slave server to confirm the new index.html file has been pushed successfully.
[root@slave-server ~]# ls -luah /var/www/html
total 1
-rw-r--r-- 1 root root 10 Feb 22 09:04 index.html
[root@slave-server ~]#

How to Fix CentOS 8 Error: ‘appstream’: Cannot prepare internal mirrorlist

If you get the error message Failed to download metadata for repo:
[root@autocontroller ~]# yum update
CentOS-8 - AppStream 70 B/s | 38 B 00:00
Error: Failed to download metadata for repo 'AppStream': Cannot prepare internal mirrorlist: No URLs in mirrorlist
CentOS Linux 8 reached End Of Life (EOL) on December 31st, 2021, which means CentOS 8 will no longer receive development resources from the official CentOS project. After Dec 31st, 2021, if you need to update your CentOS system, you need to change the mirrors to those where the packages are archived permanently.

So just follow the steps below to do that: Go to the /etc/yum.repos.d/ directory.
cd /etc/yum.repos.d/
Run the commands below to comment out the mirrorlist entries in all the files under yum.repos.d, then re-enable the existing baseurl entries:
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=|baseurl=|g' /etc/yum.repos.d/CentOS-*
Then run yum update or install any package you want
yum update -y
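To see exactly what the two sed commands do, here is a self-contained demo against a scratch repo file. The file contents and URLs below are illustrative placeholders, not the real CentOS mirror URLs:

```shell
# A scratch repo file mimicking the CentOS-8 layout (illustrative URLs)
cat > /tmp/CentOS-Demo.repo <<'EOF'
[appstream]
name=CentOS Linux 8 - AppStream
mirrorlist=http://mirrors.example.com/?repo=AppStream-8
#baseurl=http://vault.example.com/8/AppStream/x86_64/os/
EOF

# Comment out every mirrorlist line, then un-comment the baseurl line
sed -i 's/mirrorlist/#mirrorlist/g' /tmp/CentOS-Demo.repo
sed -i 's|#baseurl=|baseurl=|g' /tmp/CentOS-Demo.repo

cat /tmp/CentOS-Demo.repo
```

After the two edits, yum stops consulting the (dead) mirrorlist and fetches packages from the baseurl instead.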
Thank you...

How to Install Firefox 98 on Ubuntu 22.04

This tutorial shows beginners how to download and install Firefox 98.0 on Ubuntu 20.04 LTS and Ubuntu 22.04.
Firefox, or Mozilla Firefox, is a free and open-source web browser developed by the Mozilla Foundation and used by millions of people in their daily activities. It is a cross-platform web browser available for Android, Windows, macOS, iOS, and Linux.
Firefox 98.0 Changelog
- You can now delete downloaded files directly from the download panel and other download views using the context menu.
- The use of webRequest used to cause add-ons to start early during Firefox startup. This has changed to only use webRequest blocking calls; non-blocking calls no longer cause an early startup for add-ons.
- The HTML element already available on pre-release channels will become available on the release channel in version 98.

For the complete changelog, refer to the release notes.

Install Firefox 98.0 on Ubuntu / Linux Mint

The latest version of Firefox, 98.0, will be published to the repositories; just update the package index and install it using the command below:
sudo apt update && sudo apt install firefox
Thank you!

How to Create Docker Swarm with Multipass & Virtualbox

In this tutorial, we're going to create a Docker Swarm with a tool called multipass. You can install this tool (on either Mac or Linux) easily with brew install multipass. The other required software is VirtualBox.
To install multipass on macOS, you can follow my tutorial here:
For this little experiment, we will create a minimal Docker Swarm setup; to my understanding, one manager and two worker nodes will be sufficient, along with one server for NFS shared storage.
$ multipass launch -n manager 
$ multipass launch -n worker1 
$ multipass launch -n worker2 
$ multipass launch -n nfsserver
Running the commands above with the defaults creates the VMs required for this POC.
$ multipass list
Name                    State             IPv4             Image
manager                 Running                            Ubuntu 20.04 LTS
nfsserver               Running                            Ubuntu 20.04 LTS
worker1                 Running                            Ubuntu 20.04 LTS
worker2                 Running                            Ubuntu 20.04 LTS
Installing Docker CE on the manager and worker nodes is pretty straightforward; run the scripts below as the root user:
$ apt-get update && apt-get upgrade -y 
$ apt-get remove docker docker-engine -y 
$ apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common python-setuptools -y 
$ curl -fsSL | sudo apt-key add - 
$ add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable"
$ apt-get update 
$ apt-get install docker-ce -y 
$ systemctl enable docker 
$ systemctl restart docker 
$ apt install python3-pip -y 
$ pip install docker-compose 
$ usermod -aG docker ubuntu
Log in to the manager, worker1, and worker2 shells and verify that Docker is properly installed by executing the command below on each VM:
docker --version
Docker version 20.10.8, build 3967b7d

Initialize the Docker manager instance:
$ docker swarm init --advertise-addr
Run the join command from each worker VM:
$ docker swarm join --token
check if the setup is working as expected
$ docker node ls
ID                       HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
jklhgjfturtaiuskghmv *   manager    Ready     Active         Leader           20.10.8 
jkhgutyikjhgmnjghmvn     worker1    Ready     Active                          20.10.8 
cfgrtdyfhgvncfghggvh     worker2    Ready     Active                          20.10.8
Try installing an nginx web server on this cluster and verify that container deployment works as expected:
$ docker service create --name my-web --publish 8080:80 --replicas 2 nginx 
$ docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS
s9eabxqjgu98   my-web    replicated   2/2        nginx:latest   *:8080->80/tcp
The command above deploys an nginx web server with 2 replicas. Check where these 2 containers were deployed:
ubuntu@manager:~$ docker service ps my-web
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS 
jghasgghjda   my-web.1   nginx:latest   worker2   Running         Running 48 seconds ago 
fdreytfghca   my-web.2   nginx:latest   manager   Running         Running 56 seconds ago
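As a side note, the same service can also be described declaratively in a Compose-format stack file and deployed with docker stack deploy. A sketch mirroring the service created above (the file name is my own choice):

```yaml
# docker-stack.yml: hypothetical stack equivalent of the `docker service create` command above
version: "3.8"
services:
  my-web:
    image: nginx:latest
    ports:
      - "8080:80"
    deploy:
      replicas: 2
```

You would deploy it from the manager node with docker stack deploy -c docker-stack.yml mystack.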
enjoy it!

How to Install multipass on MacOS Monterey

The default backend on macOS is hyperkit, wrapping Apple’s Hypervisor.framework. You need macOS Yosemite, version 10.10.3 or later installed on a 2010 or newer Mac.

Multipass also supports using VirtualBox as a virtualization provider. You can download the latest version and check the requirements on the VirtualBox website. To switch the driver to VirtualBox, run:

sudo multipass set local.driver=virtualbox

Installing Multipass

To install Multipass on macOS, you have two options: the installer package or brew:

Installer package: download the latest installer from the GitHub releases page (it's the .pkg package). If you want Tab completion on the command line, install bash-completion from brew first. Open the downloaded installer and it will guide you through the necessary steps; you will need an account with Administrator privileges to complete the installation. There's also a script to uninstall:
sudo sh "/Library/Application Support/com.canonical.multipass/"
Brew: have a look at the instructions for installing Brew itself. Then it's simply:
brew install --cask multipass
To uninstall:
$ brew uninstall multipass
# or
$ brew uninstall --zap multipass # to destroy all data, too

Running multipass for the first time

Once installed, open the Terminal app and use multipass launch to create your first instance. With multipass version you can check which version you are running:
$ multipass version
multipass 1.0.0+mac
multipassd 1.0.0+mac

Wednesday, March 9, 2022

How to Limit Firefox/Google Chrome CPU and RAM

The Google Chrome and Firefox web browsers make extensive use of memory and CPU when multiple tabs are open. We can limit the CPU and RAM for those browsers, or for any other application, by using systemd.

Using systemd's transient scope units, one can allocate a certain amount of memory and CPU shares to the Firefox and Chrome web browser applications. systemd's transient units are normally only allowed for the superuser (root), so the first step is to allow the user or group that wants this feature.

Add the following polkit rule in the /etc/polkit-1/rules.d/60-systemd-manage.rules file. The rule makes sure that the user 'test' is allowed to start systemd units; change the username to one of your choice.
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        subject.user == "test") {
            return polkit.Result.YES;
    }
});
Alternatively, a group of users can be granted the same privileges through the same rule with just a little modification. Make sure the user is part of the ‘admin’ group.
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        subject.isInGroup("admin")) {
            return polkit.Result.YES;
    }
});
Now log in to the test user account and validate that the user can start and stop systemd services and run transient scope units:
$ systemctl restart sshd
  [test@localhost ~]$ systemd-run --scope sleep 30
    Running scope as unit run-2845.scope.
Now modify the GNOME launcher file of Firefox or Chrome in the /usr/share/applications directory. Modify the Exec parameter as below to set a 5G memory limit and a 200 CPU-shares limit for Firefox and Chrome. Generally, 1024 CPU shares are equivalent to one CPU; giving 2048 CPU shares would allow Chrome and Firefox to use two CPUs if required.

Firefox: /usr/share/applications/firefox.desktop
From:
Exec=/home/test/firefox/firefox %u
To:
Exec=systemd-run --scope -p CPUShares=200 -p MemoryLimit=5G /home/test/firefox/firefox %u
Chrome: /usr/share/applications/google-chrome.desktop
From:
#Exec=/usr/bin/google-chrome-stable %U
To:
Exec=systemd-run --scope -p CPUShares=200 -p MemoryLimit=5G /usr/bin/google-chrome-stable %U
Exec=systemd-run --scope -p CPUShares=200 -p MemoryLimit=5G /usr/bin/google-chrome-stable
Exec=systemd-run --scope -p CPUShares=200 -p MemoryLimit=5G /usr/bin/google-chrome-stable --incognito
Now logout from the desktop session and then re-login to validate the feature.
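Note that on newer systemd versions the CPUShares= and MemoryLimit= properties are deprecated in favor of CPUWeight= and MemoryMax=. As a persistent alternative to transient scopes, the same limits can live in a slice unit. A sketch, where the unit name browsers.slice is my own choice:

```ini
# ~/.config/systemd/user/browsers.slice: a hypothetical slice capping browser resources
[Slice]
MemoryMax=5G
CPUWeight=100
```

A launcher could then use systemd-run --user --scope --slice=browsers.slice followed by the browser command to place the browser in that slice.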

How to Configure Gitlab & Docker Dind as CI/CD Platform

In this tutorial, we will use GitLab as a CI/CD platform. We are also using Docker dind (Docker-in-Docker).
1. First, set up the repository. Update the apt package index and install the prerequisites:
$ sudo apt-get update
$ sudo apt-get install \
    ca-certificates \
    curl \
    gnupg
2. Add docker’s GPG key.
$ curl -fsSL | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
3. Set up stable docker repository.
$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install docker engine

Update apt package and install docker engine.
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli
2. Configure Docker to start on boot.
$ sudo systemctl enable docker.service
$ sudo systemctl enable containerd.service
Configure Docker remote access
1. Open docker.service with an editor:
$ sudo systemctl edit docker.service
2. Place the value below in docker.service:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://
3. Save the file.
4. Reload the systemd manager configuration:
$ sudo systemctl daemon-reload
5. Restart Docker:
$ sudo systemctl restart docker.service
6. Check the Docker configuration with ss to confirm dockerd is listening on the port:
$ sudo ss -lntp | grep dockerd

Install gitlab runner

1. Download the binary for your system.
$ sudo curl -L --output /usr/local/bin/gitlab-runner
2. Give it permission to execute.
$ sudo chmod +x /usr/local/bin/gitlab-runner
3. Create a GitLab Runner user.
$ sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
4. Install and run as a service.
$ sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
$ sudo gitlab-runner start
5. Register runner.
$ sudo gitlab-runner register --url --registration-token $REGISTRATION_TOKEN

Config gitlab runner

1. Open /etc/gitlab-runner/config.toml with an editor:
$ sudo nano /etc/gitlab-runner/config.toml
2. Add the lines below for each runner in the [runners.docker] section:
volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
network_mode = "host"
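With the runner mounting the host's Docker socket as configured above, pipeline jobs can drive Docker directly. A minimal .gitlab-ci.yml sketch, where the stage and image names are my own illustration rather than part of the original setup:

```yaml
# .gitlab-ci.yml: hypothetical pipeline that builds an image through the mounted Docker socket
stages:
  - build

build-image:
  stage: build
  image: docker:20.10
  script:
    - docker info
    - docker build -t my-app:$CI_COMMIT_SHORT_SHA .
```

Because /var/run/docker.sock is mounted into the job container, docker build here talks to the host's Docker daemon.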
And that's all, folks! Try it at your own risk.

How to use Rstudio with multiple versions of R

Installing R and maintaining its different versions is complicated. Almost every time you install, RStudio will downgrade your R version, causing more problems than it solves.

And this is a simple solution for you:

#1. Install Conda

wget -O \
&& chmod +x && bash -b -p miniconda

base_dir=$(echo $PWD)

export PATH=$base_dir/miniconda/bin:$PATH
source ~/.bashrc
echo -e "$base_dir/miniconda/etc/profile.d/" >> ~/.profile
conda init bash

#2. Install Mamba

conda install mamba -n base -c conda-forge -y
conda update conda -y
conda update --all
conda config --add channels defaults
conda config --add channels bioconda
conda config --add channels conda-forge

#3. Install R

mamba create -n R -c conda-forge r-base -y
conda activate R
mamba install -c conda-forge r-essentials

#4. Install gdebi

To install GDebi on your Ubuntu machine, run the following command:

sudo apt-get install gdebi

#5. Install Rstudio

For Ubuntu, download the Rstudio *.deb package from the official Rstudio website. Download respectively for other Operating Systems.

Use gdebi to install the deb package. The gdebi command will ensure that all additional prerequisites are also downloaded to fulfil the RStudio requirements:

sudo gdebi rstudio-1.2.5019-amd64.deb

#6 Running Rstudio

Use your desktop menu to start the RStudio application or you can start the application by executing the below command : rstudio.

But that's not why we did all this. We want different versions of R in RStudio. For that, run the following command in a terminal:

conda activate R

This will activate the conda-installed R. Now, from that same terminal, start RStudio with the rstudio command.

This will make RStudio use the conda-installed R.

How to Install Laravel in Ubuntu 22.04

In this tutorial, we will install Laravel 9 on Ubuntu 22.04. You can replicate all of these steps on many versions and flavors of Ubuntu (like Kubuntu, Xubuntu, etc.) and also on any Debian-based distribution that is binary compatible with Ubuntu.

We will utilize Composer as a dependency manager and package installer for PHP.
If you don't have Composer installed, installing it is as easy as copying this script:

sudo apt install composer

or you can install the completely bleeding-edge latest version:

sudo apt update
sudo apt install php8.0 php8.0-mbstring php8.0-xml php8.0-zip curl
curl -s | php
mv composer.phar /usr/local/bin/composer
chmod +x /usr/local/bin/composer

To install Laravel globally, you can type this command:

composer global require laravel/installer

As the last step, add Composer's global bin directory to your bash environment:

nano ~/.bashrc
export PATH="$PATH:$HOME/.config/composer/vendor/bin"
source ~/.bashrc

And it's done; you can test it by typing the laravel command.


Tuesday, March 8, 2022

How to Install Database Replicator (ReplicaDB) on Debian 10/11

1. Install Java. We can use OpenJDK 11; install it with:
sudo apt install openjdk-11-jre openjdk-11-jdk
Export the JAVA_HOME variable in your bashrc:
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
export PATH=$PATH:$JAVA_HOME/bin
2. Download the ReplicaDB tool from GitHub:
wget
Extract it to /opt/replicadb:
tar -xzvf ReplicaDB*.tar.gz -C /opt/replicadb
Create a symlink for the replicadb binary:
ln -s /opt/replicadb/bin/replicadb /usr/local/bin/replicadb
3. Test:
replicadb --version

How to Install Certbot in CentOS 8 with Snapd

You can use Let's Encrypt to make your site more secure for free. To create an SSL certificate, we can use certbot. On CentOS and other Linux distributions, installing certbot is as easy as snapping a finger: just install snap.

Snaps are applications packaged with all their dependencies to run on all popular Linux distributions from a single build. They update automatically and roll back gracefully.

Snap is available for CentOS 7.6+, and Red Hat Enterprise Linux 7.6+, from the Extra Packages for Enterprise Linux (EPEL) repository. The EPEL repository can be added to your system with the following command:
sudo yum install epel-release
Snap can now be installed as follows:
sudo yum install snapd
Once installed, the systemd unit that manages the main snap communication socket needs to be enabled:
sudo systemctl enable --now snapd.socket
To enable classic snap support, enter the following to create a symbolic link between /var/lib/snapd/snap and /snap:
sudo ln -s /var/lib/snapd/snap /snap
Either log out and back in again, or restart your system, to ensure snap’s paths are updated correctly.

Install certbot 

To install certbot, simply use the following command:
sudo snap install certbot --classic

How to Solve CentOS Error: 'AppStream': Cannot prepare internal mirrorlist: No URLs in mirrorlist

CentOS Linux 8 had reached the End Of Life (EOL) on December 31st, 2021. It means that CentOS 8 will no longer receive development resources from the official CentOS project. After Dec 31st, 2021, if you need to update your CentOS, you need to change the mirrors to where they will be archived permanently. Alternatively, you may want to upgrade to CentOS Stream.

Proceed at your own risk!

Step 1: Go to the /etc/yum.repos.d/ directory.

[root@autocontroller ~]# cd /etc/yum.repos.d/

Step 2: Run the below commands

[root@autocontroller ~]# sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
[root@autocontroller ~]# sed -i 's|#baseurl=|baseurl=|g' /etc/yum.repos.d/CentOS-*

Step 3: Now run the yum update

[root@autocontroller ~]# yum update -y

That’s it!