Wednesday, March 8, 2023

How to Initialize a Partition with the pvcreate Command

You can run the pvcreate command on individual partitions to initialize them as LVM physical volumes, rather than initializing whole disks.

The following command initializes the partitions /dev/sda1 and /dev/sdb1 as LVM physical volumes.

# pvcreate /dev/sda1 /dev/sdb1

Sample Output:

root@ubuntu-PC:~# pvcreate /dev/sda1 /dev/sdb1

  Physical volume "/dev/sda1" successfully created.

  Physical volume "/dev/sdb1" successfully created.


How to Initialize a Block Device with the pvcreate Command in Linux

The following command initializes /dev/sdc as a physical volume for later use by LVM logical volumes. Initialization is similar to formatting a file system.

# pvcreate /dev/sdc

Sample Output:

root@ubuntu-PC:~# pvcreate /dev/sdc

  Physical volume "/dev/sdb" successfully created.


How to Display Contents of a File in Linux (RHEL/Debian/CentOS/Ubuntu/Manjaro)

Linux is a popular operating system among programmers, system administrators, and developers thanks to its open-source nature and flexibility. As a Linux user, you will often need to display the contents of a file from the command-line interface.


This task can be accomplished using a variety of commands, each with its own specific purpose and usage. In this article, we will explore some of the most commonly used commands for displaying the contents of a file in Linux. We will also discuss how to interpret the output of these commands and provide some useful tips to help you make the most of your Linux experience.
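
As a quick preview of the commands covered here, the examples below use /etc/hostname and /var/log/syslog only as stand-in file names; substitute whatever file you actually want to read:

# cat /etc/hostname              # print the whole file to the terminal
# less /var/log/syslog           # page through a long file (press q to quit)
# head -n 20 /var/log/syslog     # show only the first 20 lines
# tail -f /var/log/syslog        # keep printing new lines as they are appended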

Monday, March 6, 2023

How to Install NTP Server in CentOS/RHEL 7 with Chrony

This article provides instructions on how to set up an NTP server with Chrony on CentOS/RHEL 7. Chrony is an implementation of the Network Time Protocol that performs particularly well on systems that are frequently powered down or disconnected from the network. The main configuration file for Chrony is located at /etc/chrony.conf, and the chronyd daemon runs in user space. Chrony also ships with chronyc, a command-line utility that provides a command prompt for obtaining system time information and listing the current time sources.
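
As a rough outline of what the setup involves on CentOS/RHEL 7 (the package and service names are the standard ones; the allow subnet is only an example and should be changed to match your own network):

# yum install chrony
# systemctl enable chronyd
# systemctl start chronyd
# echo "allow 192.168.1.0/24" >> /etc/chrony.conf    # permit clients on this subnet to sync from this server
# systemctl restart chronyd
# chronyc sources     # list the time sources chronyd is currently using
# chronyc tracking    # show how closely the system clock tracks its reference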

How to Create a Tinted Status Bar (Coloured Status Bar)

Android users are accustomed to the plain black status bar and may be getting a little bored with it. Since the iPhone introduced a status bar whose color follows the application in the foreground, some Android developers have tried to create applications that change the status bar color to match the app you are using. Two such tinted status bar applications in circulation are "Tinted Status Bar" and "Flat Style Colored Bars", both of which are modules for the Xposed framework.

To use these applications, your Android device must be rooted first. If your device is already rooted and you want to try making the status bar on your Android more colorful, please follow the tutorial below:

How to Integrate APM (Application Performance Monitoring) Into Golang Server



To integrate APM (application performance monitoring) into Golang servers, you can use a third-party library such as Elastic APM or DataDog APM. Here are the basic steps you can follow:

  • Choose an APM library: There are many APM libraries available for Golang, so choose one that fits your requirements. Some popular options include Elastic APM, DataDog APM, and Instana.
  • Install the library: Once you have chosen an APM library, you will need to install it in your Golang project. This usually involves importing the library into your code and setting it up to monitor your application.
  • Configure the library: You will need to configure the APM library to monitor the specific parts of your application that you are interested in. This might include setting up tracing for specific HTTP endpoints, monitoring database queries, or tracking the performance of specific functions.
  • Start the APM agent: Once you have installed and configured the APM library, you will need to start the APM agent that will collect and transmit performance data to your APM server. This usually involves starting the agent as a separate process or thread in your application.
  • View performance data: With the APM agent running, you can now view performance data for your Golang application in your APM dashboard. This might include metrics such as request latency, error rates, and database query performance.
Overall, integrating APM into Golang servers is a great way to monitor the performance of your applications and identify potential issues before they become critical. By following these steps, you can quickly and easily set up APM monitoring for your Golang projects.

Adding the APM Agent to Our Server

1. To add the apm-agent-go package to our project, we can simply run:

go get go.elastic.co/apm/v2



Then add the following imports to the file that contains our API handlers:

import (
 "context"
 "encoding/json"
 "go.elastic.co/apm/module/apmhttp/v2"
 "go.elastic.co/apm/v2"
 "net/http"
 "time"
)


2. Wrap your server mux with the apmhttp wrapper:



http.ListenAndServe(":8080", apmhttp.Wrap(mux))



3. Add an APM span to each function:

func baseHandler(w http.ResponseWriter, r *http.Request) {
 ctx := r.Context()
 span, ctx := apm.StartSpan(ctx, "baseHandler", "custom")
 defer span.End()
 // rest of the code
}

func processingRequest(ctx context.Context) {
 span, ctx := apm.StartSpan(ctx, "processingRequest", "custom")
 defer span.End()
 // rest of the code
}

func doSomething(ctx context.Context) {
 span, ctx := apm.StartSpan(ctx, "doSomething", "custom")
 defer span.End()
 // rest of the code
}


func getTodoFromAPI(ctx context.Context) (map[string]interface{}, error) {
 span, ctx := apm.StartSpan(ctx, "getTodoFromAPI", "custom")
 defer span.End()
 // rest of the code
}


The Elastic APM agent traces a request's transaction through Go's context, so we need to pass the incoming context when starting a span and use the returned context for any nested spans.



4. The final step is to let the APM agent know where the APM server is running. We do this by setting two environment variables: ELASTIC_APM_SERVICE_NAME and ELASTIC_APM_SERVER_URL.

In our main function, add:


const (
 apmServer = "http://localhost:5200"
 apmName = "test-apm-1"
)

func main() {
 os.Setenv("ELASTIC_APM_SERVICE_NAME", apmName)
 os.Setenv("ELASTIC_APM_SERVER_URL", apmServer)
 mux := http.NewServeMux()
 mux.HandleFunc("/", baseHandler)
 http.ListenAndServe(":8083", apmhttp.Wrap(mux))
}

This is the complete source code for a Golang server with APM:

package main

import (
 "context"
 "encoding/json"
 "go.elastic.co/apm/module/apmhttp/v2"
 "go.elastic.co/apm/v2"
 "net/http"
 "os"
 "time"
)


const (
 apmServer = "http://localhost:7200"
 apmName = "test-apm-1"
)


func main() {
 os.Setenv("ELASTIC_APM_SERVICE_NAME", apmName)
 os.Setenv("ELASTIC_APM_SERVER_URL", apmServer)
 mux := http.NewServeMux()
 mux.HandleFunc("/", baseHandler)
 http.ListenAndServe(":8083", apmhttp.Wrap(mux))
}


func baseHandler(w http.ResponseWriter, r *http.Request) {
 ctx := r.Context()
 span, ctx := apm.StartSpan(ctx, "baseHandler", "custom")
 defer span.End()
 processingRequest(ctx)
 todo, err := getTodoFromAPI(ctx)
 if err != nil {
  w.Write([]byte(err.Error()))
  return
 }
 data, _ := json.Marshal(todo)
 w.Write(data)
}



func processingRequest(ctx context.Context) {
 span, ctx := apm.StartSpan(ctx, "processingRequest", "custom")
 defer span.End()
 doSomething(ctx)
 // time sleep simulate some processing time

 time.Sleep(15 * time.Millisecond)
 return
}



func doSomething(ctx context.Context) {
 span, ctx := apm.StartSpan(ctx, "doSomething", "custom")
 defer span.End()
 // time sleep simulate some processing time
 time.Sleep(20 * time.Millisecond)
 return
}



func getTodoFromAPI(ctx context.Context) (map[string]interface{}, error) {
 span, ctx := apm.StartSpan(ctx, "getTodoFromAPI", "custom")
 defer span.End()
 var result map[string]interface{}
 resp, err := http.Get("https://jsonplaceholder.typicode.com/todos/1")
 if err != nil {
  return result, err
 }

 defer resp.Body.Close()
 err = json.NewDecoder(resp.Body).Decode(&result)
 if err != nil {
  return result, err
 }

 return result, err
}
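
To try the example, save the code as main.go inside a Go module and run it while an APM server is reachable at the configured URL; the module path example.com/apm-demo below is just a placeholder:

go mod init example.com/apm-demo
go mod tidy
go run main.go
curl http://localhost:8083/

The curl request hits baseHandler, and the resulting transaction with its nested spans should appear under the test-apm-1 service in the APM dashboard.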

What Are the Stack and the Heap in Rust?

In Rust (like many programming languages), memory is divided into two main parts: the stack and the heap.

The stack is a region of memory that is used to store function call frames and local variables. The stack is fast and efficient because data is stored and accessed in a Last-In-First-Out (LIFO) order, which means that when a function returns, its frame is immediately popped off the top of the stack. In Rust, stack-allocated values are stored on the stack and are automatically cleaned up when they go out of scope.
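
As a small illustration (the function and variable names are arbitrary), every value below has a fixed size known at compile time, so it lives entirely on the stack and is cleaned up automatically when its function's frame is popped:

fn add(a: i32, b: i32) -> i32 {
    // a, b, and sum are stored in add's stack frame
    let sum = a + b;
    sum // returned by copy; the frame is popped when add returns
}

fn main() {
    let x = 2; // stack-allocated local
    let y = 3; // stack-allocated local
    let total = add(x, y); // pushes a frame for add, then pops it on return
    println!("total = {total}"); // total goes out of scope at the end of main
}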