amber: writing bash scripts in amber instead. pt. 4: functions


a while ago, i blogged about uploading files to s3 using curl and provided the solution as two functions written in bash, and basically the only feedback was “wait. you can write functions in bash?”

you can. but in reality, you probably don’t want to. the syntax for defining (and calling) bash functions is horrible. there’s a reason people generally don’t use them.

writing and using functions in amber, by comparison, is a borderline delight of sanity. if you’ve ever written a function in php or python or javascript, amber’s function syntax will feel familiar and, well, ‘normal’.


a simple function to start

let’s start with a basic hello-world calibre function:

fun hello() {
    echo "hello"
}

hello()

functions are defined with the keyword fun to keep things nice and terse, and have a body inside of braces. very comfortable, standard stuff. when we call this function, the string ‘hello’ gets printed to STDOUT.

accepting arguments

arguments can be passed to a function as a comma-separated list. again, no surprises here.

fun personalized_hello(name) {
    echo "hello {name}"
}

personalized_hello("gbhorwood")

note here that we’re using string interpolation in our echo statement to output our variable.

return statements

we can return a value from a function using the return statement, just as we would expect.

fun get_personalized_hello(name) {
    return "hello {name}"
}

echo get_personalized_hello("gbhorwood")

a little bit of type safety

everybody loves some type safety in their programming language, and amber obliges us by accepting optional types for both arguments and return values.

fun sum(a: Num, b: Num): Num {
    return a + b
}

types are defined using the colon syntax.

amber has five types:

  • Text: strings, basically.
  • Num: either integers or floats.
  • Bool: the standard true or false
  • Null: the nothing type. amber uses Null as the return type for functions that do not return values.
  • []: the array type. in amber, arrays cannot contain mixed types, so the type definition for an array includes the type of the array’s elements. if we want to define an array of numbers, for instance, we would type it as [Num] (see the sketch below).
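
to make the array type concrete, here's a small sketch of a fully typed function that sums a [Num] argument. it uses amber's loop keyword for iteration; i haven't run this exact snippet, so treat it as illustrative rather than canonical.

fun sum_all(nums: [Num]): Num {
    let total = 0
    loop n in nums {
        total = total + n
    }
    return total
}

echo sum_all([1, 2, 3])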

important note: if we define one type in a function, we have to define all the types. for instance, we cannot define just the argument type without also defining the return type. not setting a return type of Null here throws an error.

// this errors because there is no return type
fun say_my_name(name: Text) {
    echo name
}

say_my_name("gbhorwood") // WARN Function has typed arguments but a generic return type

we would fix this error by writing our function as:

// this works
fun say_my_name(name: Text): Null {
    echo name
}

likewise, if we define the type of one argument, we have to define them all:

// this errors because last_name has no type
fun say_my_name(first_name: Text, last_name): Null {
    echo "hello"
}

say_my_name("grant", "horwood") // ERROR Function 'say_my_name' has a mix of generic and typed arguments

and, of course, if we define a type, we have to obey it.

fun say_my_name(name: Text) {
    echo name
}

say_my_name(9) // 1st argument 'name' of function 'say_my_name' expects type 'Text', but 'Num' was given

throwing errors with fail

functions in amber can ‘throw’ an error by using the fail statement with an exit code. in this example, we want our function to fail if the user is not root.

fun root_only_function() {
    unsafe if($whoami$ != "root") {
        fail 1
    }

    echo "only root can do this"
}

in the first installment, we covered handling bash errors using the failed block. we can ‘catch’ the errors ‘thrown’ by fail the same way.

root_only_function() failed {
    echo "failed condition"
}

likewise, we can also ignore the errors we fail from our functions by using unsafe.

unsafe root_only_function()

trapping failed cases in our functions

we can also, of course, handle errors from commands by using the failed block inside our functions. this function, for example, attempts a shell command and, on failure, throws its own fail.

fun failing_touch() {
    silent $touch /etc/passwd$ failed {
        fail 1
    }
}

failing_touch() failed {
    echo "function failing_touch failed"
}

note that we applied silent to our shell command to suppress bash’s output. we only want users to see our error messages, not the shell’s.

pushing failed cases up to our function call

trapping an error and throwing an explicit fail is a bit clumsy. amber also allows us to automatically propagate the failure up to where our function is called by replacing the failed block in our function with ?.

fun failing_touch() {
    silent $touch /etc/passwd$?
}

failing_touch() failed {
    echo "function failing_touch failed"
}

in this example, our function, when called, fails exactly the same way as it would if we’d called $touch /etc/passwd$ directly. very handy.

conclusion

this series has covered calling shell commands; handling errors; composing if statements and writing loops; using the convenience commands in the standard library; and writing functions. is that all amber can do? no. but it is certainly enough for us to start using this language to do useful, meaningful things.

a note about vim

writing code in vim is a joyful thing (or, at least, that’s my opinion), but not having syntax highlighting in this modern day and age is intolerable, so i composed an amber syntax file for vim. i’ve never written a syntax file before and the effort there is clearly sophomoric, but it does work.

🔎 this post was originally written in the grant horwood technical blog

0x00. Shell, navigation


File System Organization

Like Windows, the files on a Linux system are arranged in what is called a hierarchical directory structure. This means that they are organized in a tree-like pattern of directories (called folders in other systems), which may contain files and subdirectories. The first directory in the file system is called the root directory. The root directory contains files and subdirectories, which contain more files and subdirectories and so on and so on.
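
To make the tree concrete, listing the root directory shows the top-level branches (the exact entries vary by distribution):

$ ls /
bin  boot  dev  etc  home  lib  root  tmp  usr  var
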
The basic three commands include:

  1. pwd (print working directory)

  2. cd (change directory)

  3. ls (list files and directories).

pwd

The directory we are standing in is called the working directory. To see the name of the working directory, we use the pwd command.

[me@linuxbox me]$ pwd
/home/me

When we first log on to our Linux system, the working directory is set to our home directory.

cd

To change the working directory (where we are standing in the maze) we use the cd command. To do this, we type cd followed by the pathname of the desired working directory. A pathname is the route we take along the branches of the tree to get to the directory we want.

[me@linuxbox me]$ cd /usr/bin
[me@linuxbox bin]$ pwd
/usr/bin

If we type cd followed by nothing, cd will change the working directory to our home directory.

[me@linuxbox bin]$ cd
[me@linuxbox me]$ pwd
/home/me

A related shortcut is to type cd ~user_name. In this case, cd will change the working directory to the home directory of the specified user.

[me@linuxbox me]$ cd ~me
[me@linuxbox me]$ pwd
/home/me

Typing cd - changes the working directory to the previous one, while cd .. moves up to the parent directory.

[me@linuxbox me]$ cd /usr/bin
[me@linuxbox bin]$ pwd
/usr/bin
[me@linuxbox bin]$ cd ..
[me@linuxbox usr]$ pwd
/usr
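
And cd - takes us back to where we just were; it also prints the directory it switches to:

[me@linuxbox usr]$ cd -
/usr/bin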

ls

It is used to list the files in the current working directory.

[me@linuxbox me]$ ls
Desktop   Download  Pictures  Music  Templates  Documents examples.desktop    Public  Videos

File names that begin with a period character are hidden. This only means that ls will not list them unless we say ls -a.

[me@linuxbox me]$ ls -a
.git/    .ssh/   .ipython/ Desktop   Download  Pictures  Music  Templates

ls -l lists the files in the working directory in long format:

[me@linuxbox me]$ ls -l
drwxr-xr-x 1 me 197121  0 Oct 17  2023  OneDrive/
drwxr-xr-x 1 me 197121  0 Jan 17  2023  Pictures/
drwxr-xr-x 1 me 197121  0 Mar  3  2023  Saved Games/
drwxr-xr-x 1 me 197121  0 Apr 27  2023  Searches/

Displaying file contents

There are several commands to display the content of a file in Linux.

  • Using cat command
$ cat filename
  • Using head and tail commands

The head command displays the first 10 lines of a file, while the tail command displays the last 10 lines of a file.

$ head filename   # displays the first 10 lines of a file
$ tail filename   # displays the last 10 lines of a file

You can modify the number of lines displayed by using the -n option, for example:

$ head -n 5 filename   # displays the first 5 lines of a file
$ tail -n 5 filename   # displays the last 5 lines of a file
  • Using less command

The less command allows you to view a file one page at a time. It allows you to navigate through the file using the arrow keys or the page up/down keys.

$ less filename
  • Using awk command

This command uses awk to print each line of the file.

$ awk '1' filename

Creating files and directories

Create a file:

  • Using the touch command:
$ touch filename

This will create a new empty file with the specified name.

  • Using a text editor:
$ nano filename # using the nano editor.
$ vi filename # using vim editor.
$ code filename # using vscode editor.

This will open a text editor where you can create and edit the file. Once you’re done, save and exit the editor.

  • Using the echo command:
$ echo "Hello World!" > filename

This will create a new file with the specified name and add the text “Hello World!” to it.
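
To add text to an existing file without overwriting it, use >> to append instead:

$ echo "Another line" >> filename
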
Create a directory:

  • Using the mkdir command:
$ mkdir directoryname

This will create a new directory with the specified name.

Removing a file or directory

Removing a file:
To remove a file, use the rm command followed by the name of the file you want to remove:

$ rm filename

If the file is write-protected, rm will ask you to confirm the deletion. To remove the file without prompting, use the -f option:

$ rm -f filename

Removing a directory:
To remove an empty directory, use the rmdir command followed by the name of the directory:

$ rmdir directoryname

If the directory is not empty, you will get an error message. To remove a non-empty directory and all its contents, use the rm command with the -r option:

$ rm -r directoryname

Moving or Copying a file or directory

Moving a file or directory:
To move a file or directory, use the mv command followed by the source file or directory and the destination:

$ mv source destination

Renaming a file or directory:
To rename a file or directory, use the mv command with the source file or directory and the new name:

$ mv oldname newname

Copying a file or directory:
To copy a file or directory, use the cp command followed by the source file or directory and the destination:

$ cp source destination

To copy a directory and all its contents, use the -r option with the cp command:

$ cp -r source destination

This will copy the entire directory source and all its contents to destination.
Using rsync command:
The rsync command is a powerful tool for copying and synchronizing files and directories. It can be used to copy files and directories while preserving permissions, timestamps, and other attributes:

$ rsync -avz sourceDir destinationDir

This will copy the entire directory sourceDir and all its contents to the specified destination, preserving permissions, timestamps, and other attributes.

Thanks for your time! Please leave a comment and any suggestions are welcome. Follow me to get updates.

👾 Using Arguments in Bash Scripts


Introduction

Arguments in any bash script are inevitable for any scripting task. They make the script flexible and dynamic instead of static and hard coded. Now there are many variations in how arguments can be used effectively in a script, and this is exactly what we will discuss today. Remember, a solid understanding of arguments is crucial to automate your tasks through script arguments. For each point in this article, we will provide an example from a practical perspective as well.

Let’s start with understanding how positional parameters work in the bash script.

Steps to be covered:

  • Understanding Positional Parameters
  • Using Special Parameters
  • Implementing Flags and Options
  • Handling Variable Numbers of Arguments
  • Best Practices for Bash Script Arguments

Understanding Positional Parameters

In bash scripting, positional parameters are a fundamental concept. They’re the variables that bash scripts use to handle input data. When you run a script, you can pass arguments to it, and these arguments are stored in special variables known as positional parameters. The first argument you pass is stored in $1, the second in $2, and so on.

Let’s understand this in detail through an example. Let's say you have a bash script that needs to process three pieces of input data and you want to make use of positional parameters. The below snippet shows how you might use positional parameters to handle this:

#!/bin/bash
echo "Arg 1: $1"
echo "Arg 2: $2"
echo "Arg 3: $3"

When you run this script with three arguments, it will echo back the first three arguments you passed to it. For instance, if you run ./myscript.sh marketing sales engineering, the script will output:

Arg 1: marketing
Arg 2: sales
Arg 3: engineering

This shows how $1, $2, and $3 correspond to the first, second, and third arguments you passed to the script. It is a simple yet powerful way to make your scripts more flexible and reusable.
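
One detail worth knowing: positional parameters beyond the ninth need braces, because bash reads $10 as $1 followed by a literal 0:

echo "Arg 10: ${10}"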

Using Special Parameters

In bash scripting, there are special parameters that provide additional ways to handle input data. These include $*, $@, and $#.

The $* and $@ parameters represent all arguments that were passed to the script. While they might seem identical, their behavior diverges when you try to iterate over them in a script. Let’s illustrate this with an example:

#!/bin/bash
echo "Iterating with $*"
for arg in "$*"
do
    echo $arg
done

echo "Iterating with $@"
for arg in "$@"
do
    echo $arg
done

If you run this script with the arguments ./myscript.sh one two three, you’ll notice that $* treats all arguments as a single string, while $@ treats each argument as a separate string.
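
Concretely, it should print something like this (the labels print literally because they are single-quoted in the script):

Iterating with $*
one two three
Iterating with $@
one
two
three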

The $# parameter is different – it doesn’t represent the arguments themselves, but the number of arguments. This can be useful when your script needs to know how many arguments were passed. Here’s a simple script that uses $#:

#!/bin/bash
echo "You provided $# arguments."

If you run ./myscript.sh apple banana cherry, the script will output You provided 3 arguments. This shows how $# can be used to count the number of arguments passed to a script.
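
A common practical use of $# is validating the argument count before doing any work; a minimal sketch:

#!/bin/bash
# require exactly two arguments
if [ "$#" -ne 2 ]; then
    echo "Usage: $0 <source> <destination>" >&2
    exit 1
fi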

Implementing Flags and Options

Bash scripts often require input parameters to customize behavior, and getopts is a shell builtin that can be used to parse option flags out of the positional parameters.

#!/bin/bash

# Initialize our own variables
OPTIND=1         # Reset in case getopts has been used previously in the shell.
verbose=0
name=""

while getopts "h?vn:" opt; do
    case "$opt" in
    h|\?)
        echo "Usage: $0 [-v] [-n name]"
        exit 0
        ;;
    v)  verbose=1
        ;;
    n)  name=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, name='$name', Leftovers: $@"

In the script above, -h is used for displaying help information, and -n is used for setting a name. The v flag is used to set verbose mode. If -v is provided when the script is run, verbose is set to 1. If -n is provided, the next argument is assigned to the variable name.

Here’s an example of how you might run this script:

$ ./myscript -v -n "Example Name" leftover args

Output:

verbose=1, name='Example Name', Leftovers: leftover args

In this example, the -v flag sets verbose mode, and -n sets the name to “Example Name”. Any arguments provided after the flags (in this case, “leftover args”) are still available in the script.

Handling Variable Numbers of Arguments

Bash scripts often need to accept a variable number of arguments. This is where $@ comes into play. It’s a special shell variable that holds all the arguments provided to the script.

#!/bin/bash

# Initialize an empty string
concatenated=""

# Loop through all arguments
for arg in "$@"; do
    concatenated+="$arg "
done

# Print the concatenated string
echo "Concatenated string: $concatenated"

In the script above, we initialize an empty string concatenated. We then loop through all arguments provided to the script using $@ and append each argument to concatenated.

Here’s an example of how you might run this script:

$ ./myscript arg1 arg2 arg3

Output:

Concatenated string: arg1 arg2 arg3

In this example, the script concatenates the three arguments arg1, arg2, and arg3 into a single string. This demonstrates how a bash script can handle a variable number of arguments.

Best Practices for Script Arguments

Here are some best practices for designing bash scripts with arguments:

  • Use Intuitive Argument Names: Opt for descriptive and intuitive names for arguments. This improves readability and helps maintain the code.

    • Bad: bash script.sh $1 $2
    • Good: bash script.sh -u username -p password
  • Assign Default Values: Where practical, assign default values to arguments. This ensures that your script behaves predictably even when certain inputs are omitted.

    • Example: file_path=${1:-"/default/path"}
  • Inline Comments: Use inline comments to explain the purpose and expected values of arguments. This documentation aids future maintainers and users of your script.

    • Example: # -u: Username for login
  • Leverage getopts for Option Parsing: getopts allows for more flexible and robust parsing of short options and their arguments.

    • Example:
while getopts ":u:p:" opt; do
  case ${opt} in
    u ) username=$OPTARG;;
    p ) password=$OPTARG;;
    \? ) echo "Usage: cmd [-u] [-p]";;
  esac
done
  • Validate Input Early: Check for the existence and format of required arguments at the start of your script to prevent execution with invalid inputs.

    • Example:
if [ -z "$username" ] || [ -z "$password" ]; then
  echo "Username and password are required."
  exit 1
fi
  • Beware of Unquoted Variables: Always quote variables to handle values with spaces correctly.

    • Bad: if [ -z $var ]; then
    • Good: if [ -z "$var" ]; then
  • Explicitly Declare Intent: Use set -u to treat unset variables and parameters as an error, preventing scripts from running with unintended states.

    • Add set -u at the beginning of your script; a short sketch follows below.
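
A minimal sketch of what set -u buys you:

#!/bin/bash
set -u
echo "$UNDEFINED_VAR"   # bash aborts here with an "unbound variable" error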

Conclusion

The importance of arguments in developing scripts that can adapt to different situations is highlighted by the fact that they are extensively used in bash scripts. We focused on improving script functionality and user interaction by using positional parameters, special variables, and getopts.

Not only do the given examples provide a useful roadmap, but they also inspire developers to try new things and incorporate these ideas into their scripts. Your scripting skills will certainly improve after adopting these best practices and techniques, allowing you to make your automation tasks more efficient and adaptable.

Building a Static Site Generator in 3 steps


Maybe you know this situation: You are not so happy anymore with your personal website, the look and feel no longer represents you well enough and the time has come for a relaunch. I’ve been at this point recently and asked myself:

What do I actually really need?

I’ve been through a bunch of CMSes, always using my own website as a playground for new technologies. That isn’t a bad or uncommon approach for a web dev, but I wanted to reduce complexity and maintenance effort this time. And then I had this most brilliant, completely new, unheard-of, genius idea:

What if I’d just write vanilla HTML?! (ba dum, tss!)

My head was spinning. That’d mean tiny loading times, no server side rendering or precompiling and best of all: No dependencies hence no build step! I was happy like a kid the day before Christmas. I felt like I had just made the invention of the century.

But then came the disillusionment: What about computed data? Or data retrieved per API? I usually showed some GitHub and DEV post stats on my website. Doing it with JavaScript would result in additional requests on page load again. And I most certainly wouldn’t want to change my age manually every year (okay, that one wouldn’t be too bad if I forgot 😅). My brain was frantically searching for a way out of this to maintain its latest achievements. It finally had to realize that at least one single layer of data retrieval and insertion was needed. But would that be possible without any dependencies?

It turned out that I basically wanted a tiny Static Site Generator, and I could have chosen one of the great solutions already out there. But sometimes, you just want to stay dependency-free, keep things simple, and at the same time stay up-to-date and running long-term without having to worry about them.

I broke it down to 3 steps:

  1. Create the markup: Do the actual webdesign work. Write HTML and CSS with all the content I want.
  2. Retrieve and process the data: Write a script that calls all APIs, does all data manipulation I need and inserts it into my content.
  3. Automate it: Call the script from 2. automatically to keep the content up-to-date

Let’s visualize that:

Schema of a static site generator

Still interested? Then let’s get our hands dirty.

1. Creating the markup

# create the template file
touch template.html

Ahh, feels good to write good ol’ plain HTML again. I’m sure every IDE has a shortcut for a basic HTML structure. For VS Code, just create an empty HTML file, type ! and hit TAB.

This one might be a bit more work than clicking “Install” on the next popular WordPress theme. But personally I enjoy writing HTML and CSS and building small websites from scratch like this, giving it a personal touch. I even decided not to use JavaScript but that was more of a personal challenge and totally not necessary.

If you don’t want to start from scratch, there are a lot of good vanilla HTML templates out there, e.g. HTML5up. For now, let’s use this example:



 lang="en">

   charset="UTF-8">
   name="viewport" content="width=device-width, initial-scale=1.0">
  </span>My Website<span class="nt">


  
I'm a 36 y/o software engineer.

I love Open Source.

I created 123 PRs on GitHub.

2. Retrieving and Processing the data

# create the script file
touch build.sh

Now it gets interesting. To retrieve and manipulate data I decided to simply use the Bash. It is available on nearly every Linux distro and has a powerful language which should be sufficient to retrieve data and put it into our HTML file. So for our example, a possible build script could look like this:

# build.sh

GH_API_URL='https://api.github.com/graphql'
GITHUB_TOKEN='abc_AB12CD34...'

# call the GitHub API via curl
QUERY='{ viewer { pullRequests { totalCount } } }'
RESULT=$(curl -s -i -H 'Content-Type: application/json' -H "Authorization: bearer $GITHUB_TOKEN" -X POST -d "{\"query\": \"query $QUERY\"}" $GH_API_URL | tail -n 1)

# get the data we want
let AGE=(`date +%s`-`date +%s -d 1987-06-05`)/31536000
PR_COUNT=$(echo $RESULT | sed -r 's|.*"totalCount":([0-9]*).*|\1|g')

What’s happening here? We’re calling the GitHub GraphQL API by using curl and storing the json response in $RESULT. Note that you’ll need an access token, which you can generate in your GitHub settings. Since we get JSON with only one totalCount key, we can extract the number that follows that key with sed and a little regex. Also you can use let to assign a calculation directly to a variable, here an age calculated from a given date.
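
For reference, a successful response from that endpoint has roughly this shape (the count here is made up), which is why grabbing the number after "totalCount" with sed works:

{"data":{"viewer":{"pullRequests":{"totalCount":123}}}}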

The last thing that’s missing now, is to insert the data into our template. I decided to just use common template variable notation {{...}} (of course you can choose whatever you like) and modified the template.html like this:


...
<body>
  <p>I'm a {{age}} y/o software engineer.</p>
  <p>I love Open Source.</p>
  <p>I created {{pr_count}} PRs on GitHub.</p>
</body>
...

To replace them, we let the script copy our template and just use sed with some replacement regex again:

# build.sh
...
cp template.html index.html
sed -i -e "s|{{age}}|$AGE|g;s|{{pr_count}}|$PR_COUNT|g" index.html

Et voilà! We now have a ready-to-be-served index.html containing a computed age and an API-retrieved pull request count.

3. Configuration and automation

Let’s now improve our bash script and make it actually configurable. You might have noticed that e.g. the GitHub token and the birth date both were just hard-coded into the script. A much better approach especially for sensitive data would be, to hold all config in a separate file. I decided to use a simple .env file, but you can use whatever suits your case:

# create a config file
touch .env
# .env
BIRTH_DATE=1987-06-05
GITHUB_TOKEN=ghp_ABCDEFGHIK123456789

To load this configuration into the bash script, you can simply source it. That way, all config variables automatically become bash variables:

# build.sh
source .env
...
let AGE=(`date +%s`-`date +%s -d $BIRTH_DATE`)/31536000
...

Now that we have an HTML template and a configurable Bash script that generates a servable index.html, we can finally execute that script: how and as often* as we like. You can run it manually, but you might as well automate the execution e.g. with a cron job or using GitHub actions. This flexibility is a huge advantage if you e.g. have to move your website to another server.

* Well, not limitless, since there are limits on the number of API calls per time period. Just keep the repetition reasonable; e.g., I decided to call it once every 10 minutes.
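
If you go the cron route, a crontab entry along these lines would do it (the path is a placeholder):

*/10 * * * * cd /path/to/website && ./build.sh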

Wrapping it up

So what we did here was create a very basic and simple static site generator. Let's have a last look at the pros and cons of this approach in diff style:

+ Lightning fast, no blockers/requests on or after page load
+ Easy to maintain, no npm/composer update etc.
+ Flexible and (almost) tech and location independent
- Might be hard for some people to create/find a HTML template
- Not exactly beginner-friendly, requires knowledge of command line and handling raw data
- Might become less maintainable with a lot of pages

After diving into this, I can say it’s still the best choice for my use case (single page website for a dev loving command line). If you want to have a look at my shiny new generated website, feel free:

https://devmount.com

And of course it’s open source! Would be an honor if you use it as a template for your next little project:

https://github.com/devmount/devmount.com

You have something to add, need some explanation or found a critical aspect I didn’t think of? Please let me know in the comments.

For convenience, here are the complete example files, if you’d like to fiddle around a bit with it:



 lang="en">

   charset="UTF-8">
   name="viewport" content="width=device-width, initial-scale=1.0">
  </span>My Website<span class="nt">


  
I'm a {{age}} y/o software engineer.

I love Open Source.

I created {{pr_count}} PRs on GitHub.

# .env
BIRTH_DATE=1987-06-05
GITHUB_TOKEN=ghp_ABCDEFGHIK123456789
# build.sh
source .env

GH_API_URL='https://api.github.com/graphql'

# call the GitHub API via curl
QUERY='{ viewer { pullRequests { totalCount } } }'
RESULT=$(curl -s -i -H 'Content-Type: application/json' -H "Authorization: bearer $GITHUB_TOKEN" -X POST -d "{\"query\": \"query $QUERY\"}" $GH_API_URL | tail -n 1)

# get the data we want
let AGE=(`date +%s`-`date +%s -d $BIRTH_DATE`)/31536000
PR_COUNT=$(echo $RESULT | sed -r 's|.*"totalCount":([0-9]*).*|\1|g')

# generate website and replace template variables
cp template.html index.html
sed -i -e "s|{{age}}|$AGE|g;s|{{pr_count}}|$PR_COUNT|g" index.html

Published: 2nd May 2024

How To Change Default Python On A Linux Machine


Let’s say you have installed Python into the following folder

/home/ubuntu/Python-3.10.13

To set it as the default Python version, open a terminal.

Execute the following commands

echo 'export PATH="/home/ubuntu/Python-3.10.13:$PATH"' >> ~/.bashrc

source ~/.bashrc

Then, with the following command, you should see 3.10.13 as the default:

python --version

The above will only make it the default temporarily, for that terminal session.

Execute the commands below to make it permanent on all terminals:

echo 'export PATH=/home/ubuntu/Python-3.10.13:$PATH' >> ~/.bash_profile
echo 'export PATH=/home/ubuntu/Python-3.10.13:$PATH' >> ~/.profile
echo 'export PATH=/home/ubuntu/Python-3.10.13:$PATH' | sudo tee -a /etc/environment
echo 'export PATH=/home/ubuntu/Python-3.10.13:$PATH' | sudo tee -a /etc/profile.d/custom.sh
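
Afterwards, open a new terminal (or log out and back in) and verify that the change stuck:

python --version
# Python 3.10.13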


To Code and Beyond: A Neverland Adventure in Bash, Lua, Python, and Rust



Prologue: Departure to Neverland

Once upon a time, in the mystical world of terminals, we find ourselves tagging along on a Peter Pan-themed odyssey, soaring across the skies of Neverland. Our quest? To discover the magic similarities and differences of the unique spirits of Bash, Lua, Python, and Rust.

Chapter 1: Aboard the Jolly Roger with Bash and Lua

Under the moonlit sky, we join Captain Hook on the Jolly Roger, navigating the traditional seas of Bash and Lua. Like seasoned sailors chanting an age-old shanty, these languages use for loops as their trusted compass.

Bash – The Captain’s Command:

#!/bin/bash

sum_odd_int_array() {
    local sum=0
    for i in "$@"; do
        if (( i % 2 != 0 )); then
            (( sum+=i ))
        fi
    done
    echo $sum
}

array=(1 2 3 4 5)
echo $(sum_odd_int_array "${array[@]}")

Lua – The First Mate’s Chant:

function sum_odd_int_array(array)
    local sum = 0
    for _, v in ipairs(array) do
        if v % 2 ~= 0 then
            sum = sum + v
        end
    end
    return sum
end

local array = {1, 2, 3, 4, 5}
print(sum_odd_int_array(array))


Chapter 2: Tinker Bell’s Python Whispers

In the heart of Neverland, Tinker Bell whisks us away, revealing the wonders of Python. She shows us two paths: one trodden by many, and the other, a secret trail lit by her magical glow.

The Beaten Path – Traditional For Loop:

def sum_odd_int_array_for_loop(array: list[int]) -> int:
    sum = 0
    for x in array:
        if x % 2 != 0:
            sum += x
    return sum

array = [1, 2, 3, 4, 5]
print(sum_odd_int_array_for_loop(array))

Tinker Bell’s Enchanted Trail – Comprehension:

def sum_odd_int_array(array: list[int]) -> int:
    return sum(x for x in array if x % 2 != 0)

array = [1, 2, 3, 4, 5]
print(sum_odd_int_array(array))


Chapter 3: The Lost Boys’ Rusty Innovations

Deep in Neverland’s forests, the Lost Boys unveil their secret: a marvel of Rust. First, a familiar structure, echoing the old ways. Then, a creation so ingenious, it seemed woven from the threads of the future.

The Olden Design – Traditional For Loop:

fn sum_odd_int_array_for_loop(array: &[i32]) -> i32 {
    let mut sum = 0;
    for &x in array {
        if x % 2 != 0 {
            sum += x;
        }
    }
    sum
}

fn main() {
    let array = [1, 2, 3, 4, 5];
    println!("{}", sum_odd_int_array_for_loop(&array));
}

The Future Woven – Iterator Method:

fn sum_odd_int_array(array: &[i32]) -> i32 {
    array.iter().filter(|&&x| x % 2 != 0).sum()
}

fn main() {
    let array = [1, 2, 3, 4, 5];
    println!("{}", sum_odd_int_array(&array));
}

Epilogue: Magic in the Code

From the steady chants of Bash and Lua to the whimsical whispers of Python and the ingenious creations of Rust, each language brings its own spellbinding qualities. We’re reminded of the magic and wonder that each language holds.

As ageless programmers on a Neverland odyssey, we discover the art of transcending traditional loops, delving into the allure of modern programming languages and their captivating syntactic sugar.

In this Neverland of code, the adventure never ends, and with each line written, we continue to weave our own magical tales.

Until then, keep on coding with 🪄 and 🪝s.

How to get notified of newly connected devices on your OpenWRT router


So you’ve just set up OpenWRT with all the bells and whistles only to realize there is no out-of-the-box way to receive notifications for newly connected devices. No worries! With this tutorial, we will set up our OpenWRT server to send notifications to Pushover whenever a new device is connected to the server.

Let’s start with Pushover. Sign up is really easy, and pricing is very reasonable. It’s typically a one-time purchase per device used. For myself, I’ve purchased it for my iPhone and really enjoy its simplicity. Once signed up, create a new application with your preferred options. Lastly, find your app’s API key along with your user key. They should look something like this:

(screenshots: the application's API key and your user key)

With Pushover ready to go, let’s head back to our OpenWRT server. Create a new script file called new_device_notification.sh with the following lines:

#!/bin/sh

cat << "EOF" > /etc/hotplug.d/dhcp/90-newdev
[ "$ACTION" == "add" ] || exit 0
# [ "$ACTION" == "add" -o "$ACTION" == "update" ] || exit 0
known_macs="/etc/known_macs"
user_key="your-user-key"
api_key="your-api-key"
if ! /bin/grep -iq "$MACADDR" "$known_macs"; then
  msg="New device detected:
MAC: $MACADDR
IP: $IPADDR
Hostname: $HOSTNAME"
  echo "$MACADDR $IPADDR $HOSTNAME" >> /etc/known_macs
  curl -s \
       --form-string "token=$api_key" \
       --form-string "user=$user_key" \
       --form-string "title=New Device" \
       --form-string "message=$msg" \
       https://api.pushover.net/1/messages.json
fi
exit 0
EOF

Replace your-api-key and your-user-key with the values provided from Pushover. This script will check for new devices on your DHCP server as these devices make connections. If the server has not seen it before, it will add the device to a list of known devices and send you a notification.
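
To smoke-test the hook without waiting for a real DHCP event, you can invoke it with the environment variables hotplug would normally set (the values below are made up):

ACTION=add MACADDR=aa:bb:cc:dd:ee:ff IPADDR=192.168.1.50 HOSTNAME=testdev sh /etc/hotplug.d/dhcp/90-newdev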

Finally, let’s make this script runnable and execute it:

chmod +x new_device_notification.sh
./new_device_notification.sh

And that’s it! Simply restart your server and you should begin receiving messages from Pushover. You may receive many messages at the beginning as existing devices are added, but from then on, only new devices will trigger a message.

Enjoy!

Make sure the python env is always running in cloud shell.


Introduction
Hi, I am Akshay Rao. Recently, while working with Python in AWS CloudShell, I had to activate the Python env every time I opened the shell. This was annoying me, so I found a solution.
Pre-requisites

AWS account

Let’s start

  1. Log in to the AWS account and open CloudShell from the console.
  2. Create a Python env in it. Every time, I had to execute

    source ~/.venv/bin/activate

    to activate the Python env.

  3. Now we can put this command in the ~/.bashrc file, and it will run whenever CloudShell is opened (see the sketch below).
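
    A one-liner that appends it (assuming the env lives at ~/.venv, as above):

    echo 'source ~/.venv/bin/activate' >> ~/.bashrc
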
  4. To test, open a new tab or just source the bashrc file.
  5. From now on, we don't need to remember to activate the Python env before working.

Thank you

⚠️ Don't try this at home: A CMS written in Bash ONLY??


Here we go again.

After building an image modal with CSS only (and completely neglecting accessibility (sorry, @grahamthedev)) and an attempt to establish CSS as a backend language (although it worked, you people didn’t like it very much for some reason. I wonder why.), we’re finally back with stuff you and I probably shouldn’t ever do.

Today: Let’s use Bash (yes, Bash) to create a CMS!

(gif: Pascal no)

Pascal yes! Pascal always yes!

To give credit where credit is due: This idea came from a funny discussion with a co-worker about how to overcomplicate stuff. He came up with a “bash server”, and we started to exaggerate it more and more until I said, “Challenge accepted,” and here we are.

Disclaimer: We’ll neglect security, apart from some password encryption and most best practices. This tool will be something I’ll never ever ever use in production, and you, dear reader, should not do it either. Please. There. You’ve been warned.

Defining what it does

We’ll create a very basic CMS:

  • A user can log in and log out (so we need users and session handling)
  • A logged-in user can create, update and delete pages
  • An anonymous user can read all pages
  • A page consists of a navigation title, a route path, some markup and a flag if it should show up in the main navigation

However, to use Bash as a backend language, we first need to get it to handle HTTP requests. Luckily, there are Bash utilities we can use to listen to TCP requests and send responses, most notably netcat. We “only” need to parse a request and generate a response.

Once that’s working, we’ll use SQLite to load the requested page and render its markup.

Let’s move on to the first bits of code.

The database schema

The boring bits first. We’ll use the following database schema for our SQLite database and add a few default records:

CREATE TABLE IF NOT EXISTS pages (
  routePath TEXT NOT NULL,
  navTitle TEXT,
  isInMainNavigation BOOLEAN NOT NULL,
  markup TEXT NOT NULL
);

INSERT INTO pages VALUES
('/', 'Home', 1, '<h1>Hello, Bash CMS!</h1>'),
('/about', 'About', 1, '<h1>About</h1><p>This page was created entirely with BashCMS!</p><p><a href="/about/bash-cms">Learn more</a></p>'),
('/about/bash-cms', NULL, 0, '<h1>About BashCMS</h1><p>BashCMS is a CMS entirely written in Bash!</p>');

CREATE TABLE IF NOT EXISTS users (
  username TEXT NOT NULL,
  password TEXT NOT NULL /* Hash generated with sha256sum */
);

INSERT INTO users VALUES
('admin', 'fc8252c8dc55839967c58b9ad755a59b61b67c13227ddae4bd3f78a38bf394f7'); /* pw: admin */

CREATE TABLE IF NOT EXISTS sessions (
  sessId TEXT NOT NULL,
  userRowId INTEGER
);

We can access the database via the sqlite3 CLI tool, which we can feed with data from our Bash script.

The actual server

Let’s start with the Bash part now. To listen to HTTP requests, we use Netcat. We’ll be using the OpenBSD Netcat version.

In the listen mode, Netcat is interactive. It prints out the request details (i.e. the headers, the body, the HTTP method and all) on STDOUT and expects the user to write the response in the STDIN.

For those unfamiliar with Linux/Bash, STDIN and STDOUT are the default ways to communicate with a program. What we see in the terminal is usually STDOUT, and the keyboard input is STDIN. A program is allowed to read from STDIN and write to STDOUT.

Once we send something to Netcat, it sends that over the wire and terminates. This means that we can only handle a single request at a time. For the server to run continuously, we need to start Netcat again and let it listen after it has terminated.

To read and write from Netcat, we also need a way to programmatically read STDOUT and write to STDIN. We can do this using bash's coproc keyword, which executes a given command asynchronously in a subshell. We do nothing as long as Netcat is waiting for some incoming requests. Only once Netcat starts to write to the STDOUT do we start reading and save it to a variable.

There is, however, one small problem: Netcat does not tell us if and when it’s finished writing to STDOUT. We need to determine that ourselves. The most straightforward approach is to wait for an empty new line and stop there.

We basically end up with a structure like this:

while true # You know your script will be fun when it starts with an endless loop.
do
  coproc nc -l -p 1440 -q1 -v # Spawns netcat such that we can read from and write to it

  REQ_RAW="" # This will contain the entire request
  IFS="" # Delimiter for `read`

  while read -r TMP; do # Read every line from STDOUT
    REQ_RAW+=$TMP$'\n' # Append the line to the REQ variable

    # If the length of TMP is equal to one byte
    if [[ ${#TMP} -eq 1 ]] ; then
      break
    fi
  done <&"${COPROC[0]}" # Reads from the coproc STDOUT, line by line

  echo $REQ_RAW # Output the request for testing purposes

  kill "$COPROC_PID" # Kill the process for the subsequent request
  wait "$COPROC_PID" # Wait until it's actually gone
done

If you’re really unfamiliar with Bash, this looks intimidating. It might even do so for people who know Bash. I learned a lot during this experiment, and I am deeply convinced that Bash works very much like quantum physics: One does not understand Bash; one gets used to it.

Back to business… The “empty line” approach breaks down as soon as we want to read the HTTP body in case of a POST request. Luckily, HTTP knows a header called Content-Length that tells us the exact number of bytes.

This blows up the code for our server tremendously:

while true
do
  coproc nc -l -p 1440 -q1 -v # Spawns netcat such that we can read from and write to it. Listens on port 1440.

  REQ_RAW="" # This will contain the entire request
  IFS="" # Delimiter for `read`

  while read -r TMP; do
    REQ_RAW+=$TMP$'\n' # Append the line to the REQ variable

    TMPLEN=$(echo $TMP|wc -c) # Figure out the length of $TMP in bytes

    # Deduct the length of the read bytes from the rest of the body length
    if [[ $BODYLENGTH -ge 0 ]]; then # Still some body left to read
      BODYLENGTH=$((BODYLENGTH - TMPLEN))
    fi

    # If the request has a body (determined by the header, which is usually the last one)
    # We continue reading the exact number of bytes
    if [[ "$TMP" =~ ^"Content-Length: " ]]; then
      BODYLENGTH=$(echo "$TMP"|grep -o '[[:digit:]]\+')
      HAS_BODY=1
    fi

    # Read the entire body; abort reading
    if [[ $HAS_BODY -eq 1 ]] && [[ $BODYLENGTH -le 0 ]]; then
      break
    fi

    # No body but empty line encountered, abort reading
    if [[ $HAS_BODY -eq 0 ]] && [[ $TMPLEN -le 2 ]]; then
      break
    fi
  done <&"${COPROC[0]}" # Reads from the coproc STDOUT, line by line

  # Display the entire request for debugging
  echo $REQ_RAW

  kill "$COPROC_PID" # Kill the process for the subsequent request
  wait "$COPROC_PID" # Wait until it's actually buried
done

This works well already. We basically have a request logger now. Progress!

The anatomy of a HTTP request

We first need to parse the request to determine what the server should execute. Let’s look at what we’re dealing with.

A typical HTTP request is structured like this:

[Method] [Path + Query String] HTTP/[HTTP Version]
[Headers]

[Body]

When I perform a GET request on the server, it outputs something like this:

GET / HTTP/1.1
User-Agent: PostmanRuntime/7.29.0
Accept: */*
Cache-Control: no-cache
Host: localhost:1440
Accept-Encoding: gzip, deflate, br
Connection: keep-alive

A POST request, on the other hand, could look like this:

POST / HTTP/1.1
User-Agent: PostmanRuntime/7.29.0
Accept: */*
Cache-Control: no-cache
Host: localhost:1440
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Content-Type: multipart/form-data; boundary=--------------------------328683080620751780512479
Content-Length: 169

----------------------------328683080620751780512479
Content-Disposition: form-data; name="hello"

world
----------------------------328683080620751780512479--

We can work with this.

Adding some logic

The request is stored as a single string in a variable called REQ_RAW, so we can parse it using several other Bash utilities.

We create a function called parse_request and put that into a separate file to keep things organized. We then call this function after the reading loop:

#!/usr/bin/bash

source ./server/parse_request.sh

while true # Continue doing this, kind of like an event loop.
do
  coproc nc -l -p 1440 -q1 -v # Spawns netcat such that we can read from and write to it

  ## ...

  # Declare an associative array called `REQUEST`
  declare -A REQUEST=()

  parse_request $REQ_RAW REQUEST

  # Print the contents of the associative array
  declare -p REQUEST

  # Add more magic here

  kill "$COPROC_PID" # Kill the process for the subsequent request
  wait "$COPROC_PID" # Wait until it's actually gone
done

This function needs to do a few things at once:

  • Determine the HTTP method
  • Determine the route the user has requested
  • Parse out any GET variables (i.e. ?foo=bar, etc.)
  • Parse out the body
  • Parse out cookies

We can parse the very first line of the request to get the HTTP method and route path. Afterwards, we parse the cookies and check if we need to parse a body, which only happens on POST and PUT requests.

#
# Parses the entire request
#
function parse_request() {
  RAW_REQ=$1

  # This makes the REQUEST associative array available to write to
  # We need to make sure to not call it REQUEST, though, because
  # that name is already reserved in the outer scope
  declare -n INNER_REQ="$2"

  # Extract the request line: method, path (+ query string) and version
  REQUESTLINE=`echo "${RAW_REQ}" | sed -n 1p`
  IFS=' ' read -ra PARTS <<< "$REQUESTLINE"
  METHOD=${PARTS[0]}
  REQUEST_PATH=${PARTS[1]}

  # Split query string from the actual route
  IFS='?' read -ra REQUEST_PATH_PARTS <<< "$REQUEST_PATH"
  REQUEST_ROUTE=${REQUEST_PATH_PARTS[0]}
  QUERY_STRING=${REQUEST_PATH_PARTS[1]}

  if [[ "$QUERY_STRING" != "" ]]; then
    parse_query_string $QUERY_STRING INNER_REQ
  fi

  parse_cookies $RAW_REQ INNER_REQ

  # If we're dealing with either a POST or a PUT request, chances are there's a form body.
  # We extract that with the previously found $FORMDATA_BOUNDARY.
  if [[ "$METHOD" == "POST" ]] || [[ "$METHOD" == "PUT" ]]; then
    parse_body $RAW_REQ INNER_REQ
  fi

  INNER_REQ["METHOD"]="$METHOD"
  INNER_REQ["ROUTE"]="$REQUEST_ROUTE"
}

The query string parsing is pretty straightforward:

#
# Parses the query string and assigns it to the request object
#
function parse_query_string() {
  RAW_QUERY_STRING=$1
  declare -n REQ_ARR="$2"

  # Split the query parameters into a hashmap
  IFS='&' read -ra QUERYPARTS <<< "$QUERY_STRING"
  for PART in "${QUERYPARTS[@]}"; do
    IFS='=' read -ra KEYVALUE <<< "$PART"
    KEY=${KEYVALUE[0]}
    VALUE=${KEYVALUE[1]}
    REQ_ARR["QUERY","$KEY"]="$VALUE"
  done
}

And so is the cookie parsing:

#
# Parses cookies out of the request headers
#
function parse_cookies() {
  RAW_REQ_BODY=$1
  declare -n REQ_ARR="$2"

  COOKIE_LINE=`echo $RAW_REQ_BODY|grep 'Cookie:'`
  COOKIE=${COOKIE_LINE#"Cookie:"}

  if [[ "$COOKIE" != "" ]]; then
    IFS=';' read -r -d '' -a COOKIEPARTS <<< "$COOKIE"

    for PART in "${COOKIEPARTS[@]}"; do
      if [[ "$PART" != "" ]]; then
        IFS='=' read -ra KEYVALUE <<< "$PART"
        KEY=${KEYVALUE[0]//" "/""} # Remove all spaces, so we don't have leading spaces
        VALUE=${KEYVALUE[1]}
        REQ_ARR["COOKIE","$KEY"]=${VALUE::-1}
      fi
    done
  fi
}

In both functions, we carefully rip the necessary parts out of the entire request and split it by some characters, namely ? and = for the query string and ; and = for the cookies. We then remove some unnecessary spaces and write it to the REQUEST associative array.

Parsing the body is more complex. We’re dealing with the multipart/form-data format to allow for multi-line strings and, potentially, file uploads. I found it actually more straightforward to work with than any URL encoding.

#
# Parses the POST body and assigns it to the request object
#
function parse_body() {
  RAW_REQ_BODY=$1
  declare -n REQ_ARR="$2"

  FORM_BOUNDARY_LINE=`echo $RAW_REQ_BODY|grep 'Content-Type: multipart/form-data; boundary='`
  FORM_BOUNDARY=${FORM_BOUNDARY_LINE#"Content-Type: multipart/form-data; boundary="}

  # Replace the $FORMDATA_BOUNDARY with a single character so we can split with that.
  TMP_BODY_PARTS=`echo "${RAW_REQ_BODY//"$FORM_BOUNDARY"/$'§'}" | head -n -2` # We need to use _some_ character to use `read` here.

  IFS='§' read -r -d '' -a BODYPARTS <<< "$TMP_BODY_PARTS"

  for PART in "${BODYPARTS[@]}"; do
    # The field name lives in the part's Content-Disposition header: name="..."
    KEY=`echo "${PART}" | grep -o -P '(?<=name=").*?(?=")'`
    if [[ "$KEY" != "" ]]; then
      # Skip the part's first three lines (boundary residue, header, blank line)
      # and drop the dangling last line before the next boundary
      VALUE=`echo "${PART}" | head -n -1 | tail -n +4`
      REQ_ARR["BODY","$KEY"]=${VALUE::-1} # Strip the trailing \r
    fi
  done
}

When we run the code now with our GET request from before, we get the following output from our Bash server:

declare -A REQUEST=([ROUTE]="/" [METHOD]="GET" )

(Yes, declare -p prints a complete declare -A statement, so one could execute its output again to recreate the same associative array.)
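
That round trip is easy to verify in an interactive shell:

declare -A REQUEST=([ROUTE]="/" [METHOD]="GET")
SNAPSHOT=`declare -p REQUEST`   # the declare -A ... string from above
unset REQUEST
eval "$SNAPSHOT"                # executes the dump, recreating the array
echo "${REQUEST[METHOD]}"       # prints: GET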

The mentioned POST request would output this:

declare -A REQUEST=([BODY,hello]="world" [ROUTE]="/" [METHOD]="POST" )

Neat!

Reacting to the request

Similar to the REQUEST array, we declare a RESPONSE array. This array will contain the DOM we deliver, the status code, and some headers, like Set-Cookie or Location for redirects.

Since we need to be able to tell users apart (some are logged in and some are not), we implement a function called set_session. This generates a session ID, writes it to the SQLite database, and sets a session cookie. Any following request from the same client will send that same session ID cookie.

function set_session() {
  declare -n REQ="$1"
  declare -n RES="$2"

  if [[ "${REQ[COOKIE,SESSID]}" != "" ]]; then
    # SESSID cookie was already set once; reset it
    RES["COOKIES,SESSID"]="${REQ[COOKIE,SESSID]}"
  else
    # No SESSID cookie, so let's generate one
    SESSID=`echo $RANDOM | md5sum | head -c 20` # Taken from SO. Not cryptographically secure, but fine for a toy server.

    # Save cookie into database
    sqlite3 db.sqlite "insert into sessions values ('${SESSID}', NULL);" ".exit"
    RES["COOKIES,SESSID"]="$SESSID"
  fi
}

Notice how we need both the REQ and the RES array: we read the incoming cookie from the request and already write to the RESPONSE array by setting a COOKIES key with a sub-key of SESSID.
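
If declare -n is new to you: it creates a nameref (bash 4.3 or newer), a variable that aliases whatever variable name it is given, which is how these functions write into arrays owned by the caller. A minimal sketch:

function put() {
  declare -n TARGET="$1"      # TARGET is now an alias for the array named by $1
  TARGET["greeting"]="hello"  # writes through to the caller's array
}

declare -A BAG=()
put BAG
echo "${BAG[greeting]}"       # prints: hello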

We call this function after we call parse_request:

while true; do

  # Request reading shenanigans

  declare -A REQUEST=()
  declare -A RESPONSE=()
  parse_request "$REQ_RAW" REQUEST

  set_session REQUEST RESPONSE

  # More stuff later, don't worry

  kill "$COPROC_PID" # Kill the process for the subsequent request
  wait "$COPROC_PID" # Wait until it's actually gone
done

Next, we can implement a function to react to the actual request. We call it render_cms_page. In there, we look in the database for any entry with a route that matches the route from the request:

function render_cms_page() {
  # Fresh nameref names again, so we don't clash with the
  # REQUEST and RESPONSE arrays in the outer scope
  declare -n REQ="$1"
  declare -n RES="$2"

  DOM=`sqlite3 db.sqlite "select markup from pages where routePath='${REQ[ROUTE]}';" ".exit"`

  if [[ "$DOM" == "" ]]; then
    RES["BODY"]=`render_response "Not found" "Not found."`
    RES["STATUS"]="404 Not found"
  else
    RES["BODY"]=`render_response "Bash CMS!" "$DOM"`
    RES["STATUS"]="200 OK"
  fi
}

You might notice the render_response function in there, too. We use that to generate all of the surrounding HTML, such as the page header, the navigation and some CSS:

function render_response() {
  DOC_START=`doc_start "$1"`
  PAGE_HEADER=`page_header`
  DOC_END=`doc_end`

  cat <<EOF
    $DOC_START
    $PAGE_HEADER
$2
    $DOC_END
EOF
}

That, in turn, relies on the functions doc_start, page_header and doc_end:

function doc_start() {
  cat <<EOF
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>$1</title>
    <style>
      body { font-family: monospace; }
    </style>
  </head>
  <body>
EOF
}

function doc_end() {
  cat <<EOF
  </body>
</html>
EOF
}

function page_header() {
  # Fetch header-relevant pages for the navigation
  NAV_PAGES=`sqlite3 db.sqlite "select routePath, navTitle from pages where isInMainNavigation=1;" ".exit"`

  NAV_LINKS=""
  if [[ "$NAV_PAGES" != "" ]]; then
    while IFS='|' read -r NAV_ROUTE NAV_TITLE; do
      NAV_LINKS+="<a href='${NAV_ROUTE}'>${NAV_TITLE}</a> "
    done <<< "$NAV_PAGES"
  fi

  cat <<EOF
  <header>
    <h1>Bash CMS</h1>
    <nav>${NAV_LINKS}</nav>
  </header>
EOF
}

And with that, we’re almost done.

The last step to an actual response is to render a response string. Much like an HTTP request, a response is a single multi-line string with different parts. We only need to assemble it correctly:

function generate_response_string() {
  declare -n RES="$1"

  # Transform cookie entries into Set-Cookie headers
  COOKIES=""
  for RESKEY in "${!RES[@]}"; do
    if [[ "$RESKEY" =~ ^"COOKIES," ]]; then
      COOKIE_NAME=${RESKEY#"COOKIES,"}
      COOKIES+="Set-Cookie: $COOKIE_NAME=${RES[$RESKEY]}
" # Adds a newline after this Set-Cookie header.
    fi
  done

  RES["CONTENT_TYPE"]="text/html"
  RES["HEADERS","Content-Type"]="${RES[CONTENT_TYPE]}; charset=UTF-8"
  RES["HEADERS","Server"]="Bash. Please don't send cat /etc/passwd as a cookie because hacking is bad :("

  HEADERS=""
  for RESKEY in "${!RES[@]}"; do
    if [[ "$RESKEY" =~ ^"HEADERS," ]]; then
      HEADER_NAME=${RESKEY#"HEADERS,"}
      HEADERS+="${HEADER_NAME}: ${RES[$RESKEY]}
" # Adds a newline after this header.
    fi
  done

  cat <<EOF
HTTP/1.1 ${RES[STATUS]}
${COOKIES::-1}
${HEADERS}

${RES[BODY]}
EOF
}
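
Assembled this way, a response for a freshly generated session comes out roughly like the following (session ID shortened, body truncated):

HTTP/1.1 200 OK
Set-Cookie: SESSID=1a2b3c4d5e
Content-Type: text/html; charset=UTF-8
Server: Bash. Please don't send cat /etc/passwd as a cookie because hacking is bad :(

<!DOCTYPE html>
...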

And we’re good to go. Let’s see what this does:

A header, navigation and the content from the `/` page!

(On a side note: Using backticks anywhere is making me nervous now. Who knows what it’ll execute…)
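
The nervousness is justified beyond backticks: every request value we splice into a sqlite3 command string is a potential SQL injection. The /login handler we're about to add interpolates the username straight into a query, so a hypothetical form submission like this would do real damage:

# A username form value of:   x'; delete from pages; --
# turns the credentials lookup into two statements plus a comment:
sqlite3 db.sqlite "select rowid from users where username='x'; delete from pages; --' and password='...'"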

Adding the routes

Now that we’ve implemented the dynamic routes, let’s take care of the static ones, such as /login, /edit, /add-new-page, /logout and /delete. For that, we add two more functions: one for the login form and one for the edit form:

function render_login_page() {
  cat <<EOF
  <form method="post" action="/login">
    <input type="text" name="username" value="$1">
    <input type="password" name="password">
    <button type="submit">Log in</button>
  </form>
EOF
}

function render_edit_form() {
  NAVTITLE=$1
  IS_IN_MAIN_NAVIGATION=""
  ROUTEPATH=$3
  DOM=$4

  if [[ "$2" == "1" ]]; then
    IS_IN_MAIN_NAVIGATION=" checked"
  fi

  cat <<EOF
  <form method="post">
    <input type="text" name="navtitle" value="$NAVTITLE">
    <label><input type="checkbox" name="is_in_navigation" value="1"$IS_IN_MAIN_NAVIGATION> Show in navigation</label>
    <input type="text" name="routepath" value="$ROUTEPATH">
    <textarea name="dom">$DOM</textarea>
    <button type="submit">Save</button>
  </form>
EOF
}

And lastly, we expand the render_cms_page function:

function render_cms_page() {
  declare -n REQ="$1"
  declare -n RES="$2"

  if [[ "${REQ[ROUTE]}" == "/login" ]]; then
    if [[ "${REQ[METHOD]}" == "POST" ]]; then
      USERNAME=${REQ[BODY,username]}
      PASSWORD=`echo ${REQ[BODY,password]} | sha256sum` # sha256sum appends "  -", stripped below with ::-3
      USERID=`sqlite3 db.sqlite "select rowid from users where username='$USERNAME' and password='${PASSWORD::-3}'"`

      if [[ "$USERID" == "" ]]; then
        DOM=`render_login_page "$USERNAME"`
        DOM+="<p>Username or password incorrect</p>"
        RES["BODY"]=`render_response "Login" "$DOM"`
        RES["STATUS"]="200 OK"
      else
        # Attach the logged-in user to the current session
        sqlite3 db.sqlite "update sessions set userRowId = $USERID where sessId = '${REQ[COOKIE,SESSID]}'"
        RES["STATUS"]="307 Temporary Redirect"
        RES["HEADERS","Location"]="/"
      fi
    else
      DOM=`render_login_page`
      RES["BODY"]=`render_response "Login" "$DOM"`
      RES["STATUS"]="200 OK"
    fi
  elif [[ "${REQ[ROUTE]}" == "/logout" ]]; then
    sqlite3 db.sqlite "update sessions set userRowId = NULL where sessId = '${REQ[COOKIE,SESSID]}'"
    RES["STATUS"]="307 Temporary Redirect"
    RES["HEADERS","Location"]="/"
  elif [[ "${REQ[ROUTE]}" == "/add-new-page" ]]; then
    if [[ "${REQ[METHOD]}" == "POST" ]]; then
      IS_IN_MAIN_NAVIGATION="0"
      if [[ "${REQ[BODY,is_in_navigation]}" == "1" ]]; then
        IS_IN_MAIN_NAVIGATION="1"
      fi

      sqlite3 db.sqlite "insert into pages values ('${REQ[BODY,routepath]}', '${REQ[BODY,navtitle]}', ${IS_IN_MAIN_NAVIGATION}, '${REQ[BODY,dom]}');" ".exit"
      RES["STATUS"]="307 Temporary Redirect"
      RES["HEADERS","Location"]="${REQ[BODY,routepath]}"
    else
      DOM=`render_edit_form`
      RES["BODY"]=`render_response "New page" "$DOM"`
      RES["STATUS"]="200 OK"
    fi
  elif [[ "${REQ[ROUTE]}" == "/edit" ]]; then
    LOGGEDIN_USERID=`sqlite3 db.sqlite "select userRowId from sessions where sessId = '${REQ[COOKIE,SESSID]}'" ".exit"`

    if [[ "$LOGGEDIN_USERID" == "" ]]; then
      RES["STATUS"]="403 Forbidden"
      RES["BODY"]=`render_response "Nope" "Not allowed to do that"`
    else
      if [[ "${REQ[METHOD]}" == "POST" ]]; then
        IS_IN_MAIN_NAVIGATION="0"
        if [[ "${REQ[BODY,is_in_navigation]}" == "1" ]]; then
          IS_IN_MAIN_NAVIGATION="1"
        fi

        sqlite3 db.sqlite "update pages set routePath='${REQ[BODY,routepath]}', navTitle='${REQ[BODY,navtitle]}', isInMainNavigation=${IS_IN_MAIN_NAVIGATION}, markup='${REQ[BODY,dom]}' where routePath='${REQ[QUERY,route]}';" ".exit"
        RES["STATUS"]="307 Temporary Redirect"
        RES["HEADERS","Location"]="${REQ[BODY,routepath]}"
      else
        PAGE=`sqlite3 db.sqlite "select navTitle, isInMainNavigation, routePath, markup from pages where routePath='${REQ[QUERY,route]}'"`
        IFS='|' read -r -d '' -a PAGEPARTS <<< "$PAGE"
        DOM=`render_edit_form "${PAGEPARTS[0]}" "${PAGEPARTS[1]}" "${PAGEPARTS[2]}" "${PAGEPARTS[3]}"`
        RES["BODY"]=`render_response "Edit" "$DOM"`
        RES["STATUS"]="200 OK"
      fi
    fi
  elif [[ "${REQ[ROUTE]}" == "/delete" ]]; then
    LOGGEDIN_USERID=`sqlite3 db.sqlite "select userRowId from sessions where sessId = '${REQ[COOKIE,SESSID]}'" ".exit"`

    if [[ "$LOGGEDIN_USERID" == "" ]]; then
      RES["STATUS"]="403 Forbidden"
      RES["BODY"]=`render_response "Nope" "Not allowed to do that"`
    else
      sqlite3 db.sqlite "delete from pages where routePath='${REQ[QUERY,route]}';" ".exit"
      RES["STATUS"]="307 Temporary Redirect"
      RES["HEADERS","Location"]="/"
    fi
  else
    # Dynamic CMS pages, as before; logged-in users also get edit/delete links
    DOM=`sqlite3 db.sqlite "select markup from pages where routePath='${REQ[ROUTE]}';" ".exit"`
    LOGGEDIN_USERID=`sqlite3 db.sqlite "select userRowId from sessions where sessId = '${REQ[COOKIE,SESSID]}'" ".exit"`

    if [[ "$LOGGEDIN_USERID" != "" ]]; then
      DOM+="<p><a href='/edit?route=${REQ[ROUTE]}'>Edit</a> <a href='/delete?route=${REQ[ROUTE]}'>Delete</a></p>"
    fi

    if [[ "$DOM" == "" ]]; then
      RES["BODY"]=`render_response "Not found" "Not found."`
      RES["STATUS"]="404 Not found"
    else
      RES["BODY"]=`render_response "Bash CMS!" "$DOM"`
      RES["STATUS"]="200 OK"
    fi
  fi
}

And we’re good. With 443 lines of code, we’ve written a basic CMS from scratch in Bash only!

Demo time!

(The gif might take a few seconds to load…)

The BashCMS in action!

Q&A time!

Q: Does it perform well?

A: No. Not at all. This script can handle a single request at a time. Even Apache can handle several hundred connections at once.

Q: Should I use this…

A: No. Please, for the love of everything, don’t.

Q: Does the font need to be monospaced? That’s so 1990s

A: Yes. We’re using Bash, so why shouldn’t it be monospaced?

Q: Anything else?

A: I use Arch, by the way.

I hope you enjoyed reading this article as much as I enjoyed writing it! If so, leave a ❤! I write tech articles in my free time and like to drink coffee every once in a while.

If you want to support my efforts, you can offer me a coffee ☕ or follow me on Twitter 🐦! You can also support me directly via Paypal!


The post ⚠️ Don’t try this at home: A CMS written in Bash ONLY?? appeared first on ProdSens.live.


Shell Scripting 101: Essential Commands for Every Developer

Diving into the universe of shell scripting? Welcome aboard! Shell scripting is a potent means to automate mundane tasks, string several commands together, and interact dynamically with your system. Here’s your beginner-friendly guide to the essential shell commands.

Let’s dive into 50 shell commands:

The Basics

  1. echo – Display a line of text

It’s one of the simplest commands. It’s frequently used in shell scripts to display status or to produce formatted outputs.

echo [option] [string]

   echo "Hello, World!"
  2. man – Manual pages

If you are a bash scripter, this is the MOST IMPORTANT command you’ll need throughout your journey. Even this blog can’t compare to the help this command provides. It is used to display manual pages for commands, giving detailed information on usage.

man [option] command

   man ls
  3. ls – List contents of directory

Lists files and directories in the current directory, with options to format or filter the results.

ls [option] [directory]

   ls -l /home/user
  4. cd – Change Directory

Navigate to a different part of the filesystem.

cd [directory]

   cd /home/user/documents

Working With Files and Directories

  5. touch – Create an empty file

Generates new files quickly or updates timestamps on existing ones.

touch [option] filename

   touch sample.txt
  6. cp – Copy files or directories

Duplicate files or directories from one location to another.

cp [option] source destination

   cp file1.txt file2.txt
  7. mv – Move or rename files/directories

Transfer or rename files and directories.

mv [option] source destination

   mv oldname.txt newname.txt
  8. rm – Remove files or directories

Delete files or directories. Use with caution; it’s irreversible.

rm [option] file

   rm unwantedfile.txt
  9. mkdir – Make directories

Create new directories.

mkdir [option] directory

   mkdir new_directory
  10. rmdir – Remove empty directories

Delete empty directories.

rmdir [option] directory

   rmdir empty_directory

Manipulating Text and Files

  11. cat – Concatenate and display file contents

Read and display text files.

cat [option] file

   cat file.txt
  12. grep – Search text using patterns

Hunt for specific patterns in text.

grep [option] pattern [file...]

   grep 'hello' file.txt
  13. sed – Stream editor

Powerful tool to parse and modify text in a data stream or file.

sed [option] 'command' file

   sed 's/apple/orange/' file.txt

Permissions, Ownership and Monitoring

  14. chmod – Change file permissions

Adjust permissions on files or directories.

chmod [option] mode file

   chmod 755 script.sh
  15. chown – Change file owner and group

Alter the ownership of files or directories.

chown [option] owner[:group] file

   chown user:group file.txt
  16. ps – Report process status

Snapshot of current processes.

ps [option]

   ps aux
  17. top – Display dynamic real-time processes

Monitor system tasks in real-time.

top [option]

   top
  18. kill – Terminate or signal a process

Send signals to specific processes, usually to terminate.

kill [signal or option] pid

   kill -9 12345
  19. history – Command history

Display commands recently used.

history [option]

   history
  20. find – Search for files in directories

Locate files in the system based on various criteria.

find [path...] [expression]

   find /home/user -name "*.txt"

  21. pwd – Print Working Directory

Displays the full pathname of the current directory, helping to understand where you are in the filesystem.

pwd [option]

   pwd

Compressing and Decompressing Files

  22. tar – Archive utility

Combine multiple files into one or extract files from such a combined archive.

tar [option] [file...]

   tar -xvf archive.tar
  23. gzip – Compress files

Reduce file sizes.

gzip [option] file

   gzip file.txt
  24. gunzip – Decompress files

Decompress .gz files.

gunzip [option] file.gz

   gunzip file.txt.gz

Networking

  25. ping – Network diagnostic tool

Check the network connection to a specific IP or domain.

ping [option] destination

   ping google.com
  26. netstat – Network statistics

Display network connections, routing tables, and interface statistics.

netstat [option]

   netstat -tuln
  27. ifconfig – Display or configure network interfaces

Show or set network interfaces.

ifconfig [interface] [options]

   ifconfig eth0
  28. ssh – Secure shell remote login

Connect to remote servers securely.

ssh [option] user@host

   ssh user@domain.com
  29. scp – Securely copy files between hosts

Transfer files between local and remote hosts securely.

scp [option] source destination

   scp file.txt user@remote.com:/path/
  30. wget – Non-interactive network downloader

Download files from the internet.

wget [option] [URL]

   wget http://example.com/file.zip
  31. curl – Command line tool for transferring data

Fetch data from a URL.

curl [option] [URL]

   curl -O http://example.com/file.zip
  32. cut – Remove sections from lines of files

Extract and display specific columns from a file’s content.

cut OPTION... [FILE]...

   cut -f1,3 -d',' data.csv

Displaying Files and Contents

  33. head – Output the first part of files

Display the beginning of a file.

head [option] [file...]

   head -n 10 file.txt
  34. tail – Output the last part of files

Show the end of a file, often used to display logs.

tail [option] [file...]

   tail -f /var/log/syslog
  35. sort – Sort lines of text files

Organize the lines in text files.

sort [option] [file...]

   sort file.txt
  36. date – Display or set the system date and time

Show the current date and time or set a new one.

date [option]

   date
  37. cal – Display a calendar

Showcase a simple calendar.

cal [option]

   cal 12 2023

System Checkup and Reports

  38. df – Report file system disk space usage

Check available disk space.

df [option]

   df -h
  39. du – Estimate file and directory space usage

Gauge how much space a directory or file uses.

du [option] [file...]

   du -sh /home/user/
  40. alias – Create an alias for a command

Shorten or customize command names.

alias name='command'

   alias ll='ls -la'
  41. unalias – Remove an alias

Remove a previously defined alias.

unalias alias_name

   unalias ll
  42. which – Shows the full path of commands

Display where a particular program is located.

which [command]

   which ls
  43. passwd – Change user password

Modify the password for a user.

passwd [username]

   passwd john
  44. wc – Print newline, word, and byte counts for a file

Count the number of lines, words, and bytes.

wc [option] [file...]

   wc file.txt
  45. diff – Compare files line by line

Contrast the contents of two files.

diff [option] file1 file2

   diff file1.txt file2.txt
  46. tee – Read from standard input and write to standard output and files

Useful to split the output of a command to both display and save in a file simultaneously.

command | tee [option] file

   ls | tee output.txt
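
Commands like tee really shine once you chain several of the tools above with pipes. For instance, combining grep, sort, head and tee (app.log is just a hypothetical log file):

   grep 'ERROR' app.log | sort | head -n 10 | tee first-errors.txt

This filters the log for errors, sorts the matching lines, keeps the first ten, and both prints them and saves them to first-errors.txt.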

Running System Jobs

  47. bg – Put jobs in background

Send a process to run in the background.

bg [job_id]

   bg %1
  48. fg – Bring jobs to foreground

Retrieve a process to run in the foreground.

fg [job_id]

   fg %1
  49. jobs – List active jobs

Display the jobs currently running in the background.

jobs [option]

   jobs
  50. clear – Clear the terminal screen

Clean the console display.

clear

   clear

Arming yourself with the knowledge of these 50 shell commands will significantly enhance your command line prowess. Remember, the key to mastering them is regular practice. Happy coding!

And that’s our detailed guide to 50 foundational shell commands. Mastering these will provide a strong foundation for any developer or system administrator. Remember, practice makes perfect. Explore, experiment, and most importantly, enjoy the journey into the world of shell scripting!

Happy scripting!

The post Shell Scripting 101: Essential Commands for Every Developer appeared first on ProdSens.live.
