
CSS Text Handling: text-overflow, overflow-wrap, and More!

When it comes to fine-tuning how text behaves within its containers on a webpage, CSS provides an array of properties to help. Let’s delve into not just text-overflow and overflow-wrap, but also other valuable properties that contribute to a polished text display.

text-overflow: ellipsis;

The text-overflow property, especially when set to ellipsis, is a lifesaver for managing text that exceeds its container’s width. This property tells the browser to display an ellipsis (…) at the point where the text overflows its container.

Here’s the basic setup:

.ellipsis-text {
  white-space: nowrap;    /* Prevents text from wrapping */
  overflow: hidden;       /* Hides overflowing content */
  text-overflow: ellipsis; /* Displays ellipsis for truncated text */
  width: 200px;           /* Example width */
}

And in the HTML:

 class="ellipsis-text">
  This is a long text that will be truncated with an ellipsis when it's too long to fit in the container.

In this example, the text inside the .ellipsis-text div will be truncated with an ellipsis if it’s too long to fit within the 200px wide container. The white-space: nowrap; ensures that the text does not wrap.

overflow-wrap: break-word;

On the other hand, overflow-wrap steps in to handle long words or strings that are too large to fit within a container’s width. Setting it to break-word allows the browser to break long words and wrap them onto the next line as needed.

Here’s the CSS for overflow-wrap:

.break-word {
  width: 200px;           /* Example width */
  overflow-wrap: break-word; /* Allows breaking long words */
}

And the corresponding HTML:

 class="break-word">
  ThisIsALongUnbreakableWordThatWillBeBrokenByTheBrowserToPreventOverflowingTheContainer

In this example, overflow-wrap: break-word; ensures that the long, unbreakable word will be broken by the browser at appropriate points to prevent it from overflowing the container, making the text wrap onto the next line as needed.

word-wrap: break-word;

The word-wrap property is the original, pre-standard name for overflow-wrap, and modern browsers treat it as a legacy alias. When set to break-word, it likewise allows the browser to break long words and wrap them onto the next line.

Here’s how it looks in CSS:

.word-wrap-example {
  width: 200px;         /* Example width */
  word-wrap: break-word; /* Allows breaking long words */
}

And in HTML:

 class="word-wrap-example">
  ThisIsALongUnbreakableWordThatWillBeBrokenByTheBrowserToPreventOverflowingTheContainer

With word-wrap: break-word;, the browser will break long words to fit within the container’s width, ensuring a neat text display.

white-space: pre-wrap;

The white-space property controls how whitespace inside an element is handled. When set to pre-wrap, it preserves both spaces and line breaks. This can be useful for displaying pre-formatted text or ensuring that line breaks are respected.

Here’s the CSS for white-space: pre-wrap;:

.pre-wrap-example {
  white-space: pre-wrap; /* Preserves spaces and line breaks */
  width: 200px;          /* Example width */
}

And the corresponding HTML:

 class="pre-wrap-example">
  This is some pre-formatted text with line breaks:
  Line 1
  Line 2
  Line 3

With white-space: pre-wrap;, the text will display with line breaks preserved, making it ideal for displaying blocks of text that need to retain their original formatting.

Putting It All Together

Combining these properties gives us powerful control over how text is displayed on our web pages. Here’s an example that utilizes multiple properties:

.text-display {
  width: 200px;           /* Example width */
  white-space: nowrap;    /* Prevents text from wrapping */
  overflow: hidden;       /* Hides overflowing content */
  text-overflow: ellipsis; /* Displays ellipsis for truncated text */
  overflow-wrap: break-word; /* Allows breaking long words */
}

And in HTML:

 class="text-display">
  ThisIsALongUnbreakableWordThatWillBeBrokenByTheBrowserToPreventOverflowingTheContainer This is a long text that will be truncated with an ellipsis when it's too long to fit in the container.

With this combined approach, white-space: nowrap keeps the text on a single line, so anything that overflows the 200px container, including the long unbreakable word, is clipped and ends with an ellipsis. Note that overflow-wrap: break-word only takes effect when wrapping is allowed, so here it serves as a fallback that kicks in if white-space: nowrap is later removed.

In conclusion, mastering text-overflow, overflow-wrap, word-wrap, and white-space properties allows for elegant and effective handling of text within containers. Whether it’s truncating long text, breaking unwieldy words, or preserving formatting, CSS offers a versatile toolkit for achieving the desired text display on your web pages.

Dart Functions and Parameter Types — Part 3

Exploring Dart Functions and Parameter Types — Positional Arguments, One-Line Function, Optional Parameters.

Functions in Dart provide a powerful way to organize code and encapsulate functionality. In this blog post, we’ll explore different aspects of Dart functions, including positional arguments, one-line functions, optional parameters, and named parameters.

1. Positional Arguments
Positional arguments are the most basic kind of argument in Dart functions; they are defined by their position in the function call.

void add(int x, int y) {
  print(x + y);
}

In the add function, x and y are positional arguments. When calling add(3, 5), x will be assigned the value 3 and y will be assigned 5, resulting in 8 being printed.

2. One-Line Function
One-line functions provide a concise way to define simple functions in Dart. They are often used for short and straightforward operations.

void addsingleLineFunction(int x, int y) => print(x + y);

The addsingleLineFunction function is a one-line equivalent of the add function. It takes two positional arguments x and y and prints their sum.

Another Method
A second one-line style defines a function that implicitly returns a value instead of printing it.

addsingleLineFunctionMethod(int x, int y) => x + y;

Unlike the previous functions, addsingleLineFunctionMethod implicitly returns the sum of x and y. It does not use the print function, making it suitable for returning values directly.

3. Optional Parameters
Optional parameters allow for flexibility in function calls by making certain parameters optional.

Positionally Optional

addPostional(int x, [int y = 0]) => x + y;

The addPostional function takes one required positional argument x and one optional positional argument y with a default value of 0. This means you can call addPostional(3) or addPostional(3, 5).

Named Optional

calculateArea({double width = 0.0, double height = 0.0}) => width * height;

calculateArea computes the area of a rectangle. It uses named optional parameters width and height with default values of 0.0. This allows for clear and explicit function calls like calculateArea(width: 10.0, height: 5.0).

4. Named Required
Named required parameters ensure that certain arguments are provided during function calls.

calculateAreaRequired({required double width , required double height}) => width * height;

calculateAreaRequired calculates the area of a rectangle. The parameters width and height are named and required, meaning you must specify them during the function call.
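
To see how these parameter styles are called in practice, here is a quick sketch (the main function is added for illustration; the outputs follow from the definitions above):

void main() {
  print(addPostional(3));        // 3 (y defaults to 0)
  print(addPostional(3, 5));     // 8
  print(calculateArea(width: 10.0, height: 5.0));         // 50.0
  print(calculateAreaRequired(width: 10.0, height: 5.0)); // 50.0
}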

Putting It All Together

void main() {
  // Function with no parameters
  void greet() {
    print("Hello, Dart!");
  }

  // Function with required parameters
  void greetWithName(String name) {
    print("Hello, $name!");
  }

  // Function with optional parameters
  void greetWithOptional({String name = 'sadanand gadwal'}) {
    print("Hello, $name!");
  }

  // Function with positional parameters
  void greetWithPositional([String name = 'sadanand gadwal']) {
    print("Hello, $name!");
  }

  // Function with return value
  int add(int a, int b) {
    return a + b;
  }

  greet(); // Hello, Dart!
  greetWithName("sadanand gadwal"); // Hello, sadanand gadwal!
  greetWithOptional(name: "sadanand gadwal"); // Hello, sadanand gadwal!
  greetWithOptional(); // Hello, sadanand gadwal!
  greetWithPositional("sadanand gadwal"); // Hello, sadanand gadwal!
  greetWithPositional(); // Hello, sadanand gadwal!
  print(add(3, 5)); // 8
}

Conclusion
Understanding these different types of parameters and function definitions in Dart is essential for writing clean, readable, and flexible code. They enable you to express complex logic concisely and in an organized way, making your Dart programs more maintainable and efficient.

How Do Snap Judgments Shape Our Perceptions?

How do snap judgments shape our perceptions of others? Discuss your views.

Follow the DEVteam for more discussions and online camaraderie!

🐘 Introduction to Docker Compose

Author: Muhammad Khabbab

Brief explanation of Docker Compose and its purpose

Docker Compose is a tool for defining and running multi-container Docker applications. It solves the problem of coordinating several containers by letting you start them all from a single YAML file, which makes it a great fit for development, testing, and staging environments as well as CI workflows. Because Compose stores your whole application stack in one version-controlled file, it is also easy for others to contribute to your project. Compose additionally isolates your application from the host environment and keeps it consistent across instances.

Overview of the benefits and use cases of Docker Compose

Benefits of Docker Compose

Provide Reproducible Environments

Compose lets you easily create local environments that are identical to your production environment. You can test your applications against these environments and reduce errors and unexpected behavior in production.

Ensure the Security of Internal Container Networking

For internal communication, all the containers defined in the compose file are placed on the same internal network. This protects them from unauthorized access from the outside. It also simplifies the management of the multi-container application network.

Help in Scalability and High Availability

You can scale specific services inside your application using Docker Compose by defining the necessary number of container instances. This increases the capacity of your application and ensures high availability by dividing the workload among several instances.

Recreate containers only when they have changed

Compose caches the configuration used while building a container, and when we restart a service that has not changed, Compose reuses the existing containers. Reusing containers this way means we can apply changes to our environments very quickly.

Use Cases of Docker Compose

Development environments

Running applications and their dependencies in isolated environments is essential while developing software. For instance, you could be dependent on the application of another team, and that team may have its own set of complications, such as having to set up the database in a specific way. You can execute the entire stack or delete it using Compose with only one command.

Automated testing environments

Tools that make it simple to create and remove environments are often needed for automated workflows such as CI/CD pipelines. Compose offers an efficient approach for creating and destroying such environments for your test suite through the use of a configuration file.

Single host deployments

Even though Compose was primarily created for workflows in development and testing, it is sometimes utilized in production to run containers on a single host. Although Compose is becoming better, it still functions more as a wrapper over the Docker API than an orchestrator like Swarm or Kubernetes.

II. Installation and Setup

Instructions for installing Docker Compose on different operating systems

Installing Docker Desktop is the quickest and most convenient way to use Docker Compose. Along with Docker Engine and the Docker CLI, which are necessary for Compose to function, Docker Desktop also contains Docker Compose. You can download Docker Desktop (Windows, Linux, macOS) from the official Docker website.


Verifying the installation

To verify the installation of Docker Compose, open the terminal or command prompt for your OS and run the command docker-compose --version. If the output shows the Docker Compose version, your installation is successful.

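For example (the exact version string will vary with your installation):

$ docker-compose --version
Docker Compose version v2.24.5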

Configuring Docker Compose for your environment

You have to create a Compose file that describes the services of your application in order to configure Docker Compose in your environment. A YAML file contains information on each service’s image, command, ports, volumes, environment variables, dependencies, and other settings.

Suppose we have created a Docker Compose file (with a .yml extension) in our project directory and specified each component's configuration. The file can be viewed and validated by navigating to the project directory in the terminal and running the docker-compose config command.


III. Writing a Docker Compose File

Introduction to the Docker Compose YAML syntax

YAML is a data serialization language employed in a variety of programming scenarios, including internet communications, object persistence, and, in the case of Compose, configuration files. YAML syntax can construct key-value pairs and hierarchical structures, with indentation showing how the different parts and components relate to one another and work together. YAML also supports inline comments, which JSON does not, making YAML significantly more suitable for describing Compose files.

Overview of the key sections in a Docker Compose file

Version

Specifies which version of the Docker Compose file syntax should be used. The version's value must be a string, containing either a major and minor version number, such as 3.9, or only a major version number, such as 3.

Services

The containers made for the services in your application are configured in the services mapping. Under the services key, a layered mapping is set for each service. Each service is capable of having whatever name you like. The services with the names web, db, and redis are shown in our example below:

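A minimal sketch of such a services mapping (the images chosen here are illustrative assumptions):

services:
  web:
    image: nginx
  db:
    image: mysql
  redis:
    image: redis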

Network

You can specify the networks that link the services and enable communication between them; otherwise, Compose will automatically construct a new network using the application’s default bridge driver. The network’s name is derived from the name of the directory in which the Compose file is located.

Volume

The volumes key is declared in much the same way as volumes created with docker volume create, and volumes can then be referenced from the services section in their configuration keys. Additionally, you can declare external volumes that were created with the docker volume create command or by another Compose file.

Step-by-step guide on writing a basic Docker Compose file

Defining services

Under the services section, begin by listing the services or containers that make up your application. Similar to how you configure containers using docker run command arguments, you define the configuration options for service containers inside of the service configuration mapping.


Specifying container images and versions

Choose the image for each service from a container registry, such as Docker Hub. You can use the latest tag or pin a specific image version.

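For instance (the services and tags below are illustrative):

services:
  web:
    image: nginx:latest  # always pulls the most recent image
  db:
    image: mysql:8.0     # pins a specific version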

Configuring or mapping volume settings

You need to specify the volume mapping between the host system and the container in order to configure a bind mount. It follows the [host-path]:[container-path] format. In the example below, the ./html directory on the host system is mapped to the /usr/share/nginx/html directory in the web container. You can further customize other volume settings as well, such as naming volumes or marking them read-only.

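A sketch of that bind mount, following the [host-path]:[container-path] format:

services:
  web:
    image: nginx
    volumes:
      - ./html:/usr/share/nginx/html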

Environment variables

Environment variables let you configure your containers dynamically without affecting the container images themselves. In the example below, the environment key for a MySQL database service is defined: MYSQL_ROOT_PASSWORD is the name of the environment variable and secret is its value, which can change dynamically. You can add several environment variables this way, with each variable on a separate line beginning with a hyphen (-).

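A sketch of that service definition (the image tag is an assumption):

services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret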

Exposing ports

You need to define the ports to expose on the host system and map them to the container's internal ports using the ports key. In the example below, port 80 of the web container is mapped to port 8080 on the host system.

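A sketch of that port mapping:

services:
  web:
    image: nginx
    ports:
      - "8080:80"  # host port 8080 maps to container port 80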

IV. Managing Docker Compose Projects

Creating a Docker Compose project directory structure

  • The services (or containers) you want to run as part of your application are specified in a docker-compose.yml file created in your project root directory.

  • Depending on the requirements and demands of your application, you can also create a ‘services/‘ directory that contains all services and their related files like the Dockerfile, requirements.txt, and app.py.

  • Any persistent data directories that could be mounted as volumes in your Docker Compose services are kept in the ‘volume/‘ directory.

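Putting those pieces together, a project layout might look like this (the file names under services/ are examples and will vary by project):

my-app/
├── docker-compose.yml
├── services/
│   ├── Dockerfile
│   ├── requirements.txt
│   └── app.py
└── volume/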

Running Docker Compose commands

1. docker-compose up

To start your application, you need to build and run all the containers specified in the configuration file, and Docker Compose lets you do this with a single command. Using the command line, navigate to the project root directory where the configuration file is placed and run docker-compose up. Please see the example below:

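A minimal sketch of the workflow (the project path is illustrative):

$ cd ~/my-app        # directory containing docker-compose.yml
$ docker-compose up  # add -d to run the containers in the background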

2. docker-compose down

Now, if you want to stop your application and remove all the containers or services that were created and launched from the configuration file, Docker Compose makes this just as easy: run docker-compose down from the same location as the configuration file.


3. docker-compose build

If you want to rebuild the images for each service defined in the YAML configuration file, or if the Dockerfiles or build contexts have changed, run docker-compose build from the location where your configuration file is placed.

4. docker-compose start and docker-compose stop

If you want to start or stop your application without removing the containers, Docker Compose covers this with the docker-compose start and docker-compose stop commands.


Managing multiple Docker Compose environments (e.g., development, production)

Environment variables with docker-compose make it possible to control how your application behaves in different environments. For instance, you can set up your development and production environments with different network settings, ports, or database credentials. To achieve this, provide environment variables in the docker-compose.yml file using the environment or env_file keys for each service. Additionally, you can create separate configuration files for each environment, such as docker-compose.production.yml, docker-compose.staging.yml, and docker-compose.development.yml. Once the environment-specific files are created, use the -f or --file option with docker-compose to pick the right one for each environment. For a staging environment, navigate to the staging configuration file's location and run docker-compose -f docker-compose.staging.yml up.

Overriding Docker Compose configurations using environment-specific files

Once you have created multiple Compose files for the configurations of different environments, docker-compose also lets you override one configuration with another. Override files can hold different values for any configuration specified in the main docker-compose.yml file. Suppose your application currently runs with the base configuration in docker-compose.yml, and as a developer you want to apply development-specific settings. For that purpose, create another file (docker-compose.development.override.yml) in the same directory containing the development-specific values, and run a docker-compose command that explicitly specifies (with the -f option) the files to layer on top of the base file. Following this example, the command that shows the applied configuration is docker-compose -f docker-compose.yml -f docker-compose.development.override.yml config.

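As an illustration, an override file might remap a port defined in the base file (the service name and ports below are assumptions):

# docker-compose.development.override.yml
services:
  web:
    ports:
      - "3000:80"

Running docker-compose -f docker-compose.yml -f docker-compose.development.override.yml up would then start the stack with the development port mapping applied on top of the base configuration.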

V. Conclusion

Summary of key points covered in the guide

  1. The administration and orchestration of multi-container applications are made easier by Docker Compose.

  2. Using a YAML file, Docker Compose enables you to specify and set up your application services.

  3. In accordance with the settings described in the YAML file, it automates the creation of networks, volumes, and containers.

  4. For your application, Docker Compose offers advantages in reproducible environments, internal networking security, scalability, high availability, and caching configuration.

  5. Simple one-line commands in Docker Compose, including up, down, build, start, and stop, assist in managing and controlling your containers.

  6. Using different YAML files, you can establish and switch to different Docker Compose environments (such as development, staging, testing, and production).

Encouragement for further exploration and experimentation with Docker Compose

  1. Research and practice more complex capabilities of Docker Compose, such as the use of external networks, health checks, or service dependencies.

  2. Learn more about other configuration choices and best practices by digging further into the resources and documentation for Docker Compose.

  3. Join online forums and groups dedicated to Docker and Docker Compose to share your knowledge and get insight from others.

  4. Staying up to date with the most recent Docker and Docker Compose releases and upgrades can help you continually advance your knowledge and abilities.

Introducing NeoHaskell: A beacon of joy in a greyed tech world

Today, I’m proud to announce a (free and open-source) project that I’ve been working on for many years:

NeoHaskell, a dialect of Haskell that prioritizes newcomer-friendliness and productivity.

I embarked on the NeoHaskell project fueled by a diverse array of motivations. I see Haskell as a supremely potent language, leading the frontier of software development due to its inventive and meticulously crafted nature. However, its potential seems overshadowed by intricate details and a community primarily focused on resolving theoretical academic and mathematical challenges, often overlooking pragmatic solutions, which can be overwhelming for newcomers. Based on my experience with incorporating Haskell into production for two years and the subsequent transition to TypeScript due to escalating complexity, I recognized a critical need for a language that could refine software development while preserving the groundbreaking aspects of Haskell. This realization, coupled with the challenges encountered in the software development sector, inspired the creation of NeoHaskell. Through this initiative, I aspire to develop an optimal programming language and ecosystem that eradicates accidental complexity, either in mental form, or in code form.

This is a language for those who have just too many things in their head and they want to release their ideas, but don’t have the time to commit daily to maintain the required context of their projects in their heads. For those who want to play and have fun while producing awesome software, regardless of their experience, background, or interests. And for those teams that, inevitably, are high-rotational (either internally in the company, or externally) but want to be productive and happy while performing their craft.


A language? A dialect? What do you mean?

Depending on who you ask, a programming language can be different things. If you ask the Haskell community, many will tell you that the language is the Haskell specification, and that what currently is being used is not Haskell itself, but an extension of Haskell that is supported by the GHC compiler. Similar to the C language, a programming language would be a specification.

Other people might tell you that a programming language is the result of the supported features of its compiler, like Rust, which keeps evolving with time.

I personally think that a programming language is the result of the agreement of its community on tools, libraries, patterns, and practices. I hear folks saying "he didn't learn Python, he is programming in Java in Python", which affirms this.

NeoHaskell exploits this philosophy to establish a parallel universe in the Haskell ecosystem, piggybacking on the extremely good tooling and libraries that are already there, and have been for a long time. It establishes completely different patterns, practices, naming, and even approaches to code that would be considered heretical in Haskell.

The solution is not having more hands implementing more features, the solution is to stop. And prioritize as if this was a product.

This is not a priority for the Haskell community, and that’s the point:

NeoHaskell is not Haskell, it’s a parallel universe built on top of it.


What’s the goal?

NeoHaskell’s core mission is to provide a seamless, productive ecosystem for joyful software development. It’s designed to guide developers toward faster, more reliable releases, through a concentrated effort to alleviate cognitive load, by removing unnecessary complexity and decision making.

We welcome developers from diverse backgrounds, including backend and frontend development, data analysis, mobile development, game development, and whatever else you can imagine. While simplification of the development process is a priority, NeoHaskell also emphasizes the importance of improving developer satisfaction by incorporating structured architectures to reduce decision making even more.

NeoHaskell’s vision is to render the development process more user-friendly and enjoyable, enabling users to realize their ideas effortlessly and efficiently. It wants to transform the development experience, making it feel more like “play” rather than “work.”


Why not just documenting Haskell?

If you think of a programming language as the subconscious agreement of its community, the key part is the way of working and approaching programming as a group. In the same way that the Java community is used to working by defining factories, beans, and so on, the Haskell community is used to working in its own way too. That is not wrong or incorrect; they have their own goals.

The issue is that documenting technology is easy; changing the minds of an entire community is not.

I’m a single person trying to fix the world. I have limited time and energy, and I don’t want to be changing the views of anyone. I’m doing this in my free time, time that I dedicate to be more happy. Convincing people doesn’t make me happy.

I’m building a wagon that is open for everyone who wants to hop on and support it, but I don’t want to deal with people that don’t.

I want to establish a new concept so it is easier for newcomers to find resources and get up and running in the most effective way possible, all while feeling unstoppable because they’re using a tool that they love.

In response to any skepticism from the Haskell community, I understand the perceived sufficiency of Haskell in its present state, given their context and experience. Most of them are very smart people and great professionals, and they are completely right to think that way.

However, when considering individuals with diverse approaches to software development, like those viewing programming languages as tools, or those valuing release speed over perfection, including early-career developers with abundant creative ideas, the priorities noticeably shift.

NeoHaskell is focused on enabling swift and assured releases, valuing user-friendliness and practicality over aspects like precision, code accuracy, and complex type-level code for the sake of perfection. The project recognizes that developers have varied needs, and it commits to catering to those prioritizing efficiency and feasibility in bringing their visions to life.


What are you trying to solve exactly?

There are too many entry points to a task as simple as running your code, such as plain GHC, Cabal, Stack, Nix, etc., each presenting its own learning curve and no recognized superior option. The environment is further compounded by multiple language extensions and different frameworks for handling side effects. These, combined with the daunting mathematical concepts in the language, often discourage newcomers.

Not even mentioning even more niche workflows like mobile apps, web apps, and even game development, that are there, but are hidden deep down under layers of accidental complexity.

NeoHaskell counters these challenges by introducing a unified CLI tool that simplifies everything, from installing the required compiler to managing packages and running tests. It streamlines all the different workflows through official templates, so one can get started right away with the task that they want to do.

We aim to support newcomers by providing analogies to other languages and featuring these in the documentation to facilitate comprehensive learning, while renaming or discarding terms that would be unnecessarily complex for newcomers.


How are you trying to solve it?

NeoHaskell’s development is centered around the principle of catering to its community, the NeoHaskell community, avoiding the trap of chasing unrealistic perfection and ending up with an unused product after years of development.

The main goal is to create many different pieces:

  • A remarkable standard library
  • An integrated CLI tool with precise error messages
  • Templates with planned architectures
  • Documentation with a set of recipes
  • Mobile app packaging
  • Python interoperability
  • And many more

These objectives, though ambitious, are attainable by delivering small increments and focusing on fostering one essential element: developer joy.

Rather than promoting the creation of new architectures and the innovative use of advanced features, NeoHaskell favors well-established patterns in the software industry and embraces functional programming where appropriate, allowing for imperative code where it’s more suitable. Comprehensive documentation, a supportive Discord community, and a philosophy of “if it takes more than 15 mins to figure out, it is a bug” are the key points to ensure the success and happiness of its users.

One of the key ideas of NeoHaskell is to reuse as much technology as possible, while staying in the realm of NeoHaskell. This approach opens up many paths, and leads you very far. Our goal is to be happy while coding what makes our project different, not all the bullshit that just gets in the way. If your goal is to make a todo-list mobile application in the most joyful way, using NeoHaskell, you shouldn't have to think at all about how the rendering of the view is actually implemented, even if the mobile app is a React Native renderer of HyperView XML views retrieved from a 100% NeoHaskell backend.


When you approach development like this, by making concessions instead of closing doors, you open up the door for innovation and improvement, while maintaining a healthy and productive ecosystem. The other way around, you end up with a wasteland filled with gardeners ensuring that someday the plants will grow exactly in the perfect way that they want.


How does it piggy-back the Haskell ecosystem?

NeoHaskell preserves 100% compatibility with Haskell, seamlessly integrating existing packages and tools designed for Haskell. However, its goal is not to be backward compatible. It will refine several elements from the Haskell standard library, with functions being renamed, certain operators being concealed and new ones introduced, and type classes being restructured to ensure their practical applicability, if they are applicable.

NeoHaskell will actively utilize compiler plugins to modify the compiler’s behavior to align more with newcomer friendly messages and features. It will maximize the potential of existing libraries, sometimes through encapsulation or re-exportation, as well as the usage of all the plethora of great CLI tooling that exists for Haskell.


How will be the design and priorization process?

I believe that in order for a product to be successful, the design process must be centralized in a single person. This person must listen to the users and the other designers, and in general must have an open mind to always cherry-pick all possible ideas in order to improve the product. I don't believe that a product should be guided by democracy, nor should it implement every suggestion from every user. In other words, I'll be the one in charge of generating and listening to discussions, and prioritizing the features of the project.

I understand that this comes with some risk, but at the same time I believe that programming tools like Python and Ruby that are much loved by their communities are like that because of the BDFL model.

NeoHaskell's development will be open and transparent, marked by continuous sharing of progress and regular updates. My biggest priority is to keep the community informed and aware of my vision, not to retreat into some kind of spiritual seclusion to code while everyone thinks that the project has stalled and I have disappeared.

Anyone interested can contribute by participating in discussions on Discord and GitHub. Contributions go beyond code; the most important part is the design process, and that can only be achieved through communication between the members of the community.


I want to contribute, where is the code?

If you’ve read “the project that I’ve been working on for many years” as “the project that I’ve been coding for many years”, I’m sorry to disappoint you, that is not the case.

I'm glad that it isn't, because that would have been a complete mistake. Design is a thinking process; it is a communication process that goes back and forth. It is about talking to people and listening to every conversation with an open mind. It is about getting into tasks that you don't like and doing them for the sake of understanding what's going on, what the users are doing, and what could improve their lives.

I didn’t code NeoHaskell, instead I wrote TypeScript, both for backend, frontend, CLI tools, infra-as-code, and many more things. I worked with enterprise clients whose teams are so big and the team member rotation is so high, that every little code pattern had to be stitched in the exact same way so there were no surprises when team members rotated. I coded plain Java, as well as Python, in the most boring and bland way possible. All of that, taking notes and gathering insights for NeoHaskell. That is the real work. Getting all these puzzle pieces that are spinning in a tornado around you, picking them up, while attempting to stay on the ground and assembling the puzzle.

There’s this thing that people call “Haskell curse”, which is the feeling of disgust for using other, inferior, technologies that aren’t Haskell.

NeoHaskell lifts the curse. Every interaction with technology and users from other ecosystems becomes a blessing, and an opportunity to understand what makes us happy as software craftspeople. Every frustration becomes a leverage point to create the most awesome language, ecosystem, and community ever in the world. So, check out the website, join the Discord, and hop into the project’s GitHub, this is not possible without you (literally).


Afterword

Writing this post was not easy at all. I'm very happy that this project finally sees the light, because I have truly believed, since the day I discovered Haskell in that old dusty book in my faculty's library, that it has the power to be the leading language in the software industry. You cannot imagine how many different attempts I've made, how many prototypes I actually wrote, and how many conversations I've had with developers who thought that Haskell is just hipster bullshit that is not useful at all.

If you’ve read this far, let me thank you from the deepest part of my heart, even if you don’t believe in this project, or you’re not interested, and if you are, have this huge virtual hug and let’s chat! I’m eager to talk with people that are interested in this idea.

NeoHaskell may evolve as a specialized community or may blossom into something more extensive and mainstream.

My hopes are humble: for it to serve as an invaluable resource enabling the community to experience continual satisfaction, and for it to encourage the Haskell community to emphasize practicality and inclusivity.


Thank you, really,
NickSeagull

Authentication system using Golang and Sveltekit – User registration

Introduction

With the basic setup laid bare, it’s time to build a truly useful API service for our authentication system. In this article, we will delve into user registration, storage in the database, password hashing using argon2id, sending templated emails, and generating truly random and secure tokens, among others. Let’s get on!

Source code

The source code for this series is hosted on GitHub via:

Sirneij/go-auth: a fullstack session-based authentication system using Golang and SvelteKit.

Implementation

Step 1: Create the user’s database schema

We need a database table to store our application's users' data. To generate and migrate a schema, we'll use golang-migrate. Kindly follow these instructions to install it on your operating system. To create a pair of migration files (up and down) for our user table, issue the following command in your terminal at the root of your project:

~/Documents/Projects/web/go-auth/go-auth-backend$ migrate create -seq -ext=.sql -dir=./migrations create_users_table

-seq instructs the CLI to use sequential numbering instead of the default Unix timestamp. We opted for .sql file extensions for the generated files by passing -ext. The generated files will live in the migrations folder we created in the previous article, and -dir allows us to specify that. Lastly, we supplied the name of the files we want to create. You should now see two numbered files in the migrations folder. Kindly open the up file and fill in the following schema:

-- migrations/000001_create_users_table.up.sql
-- Add up migration script here
-- User table
CREATE TABLE IF NOT EXISTS users(
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL UNIQUE,
    password TEXT NOT NULL,
    first_name TEXT NOT NULL,
    last_name TEXT NOT NULL,
    is_active BOOLEAN DEFAULT FALSE,
    is_staff BOOLEAN DEFAULT FALSE,
    is_superuser BOOLEAN DEFAULT FALSE,
    thumbnail TEXT NULL,
    date_joined TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS users_id_email_is_active_indx ON users (id, email, is_active);
-- Create a domain for phone data type
CREATE DOMAIN phone AS TEXT CHECK(
    octet_length(VALUE) BETWEEN 1
    /*+*/
    + 8 AND 1
    /*+*/
    + 15 + 3
    AND VALUE ~ '^\+\d+$'
);
-- User details table (One-to-one relationship)
CREATE TABLE user_profile (
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL UNIQUE,
    phone_number phone NULL,
    birth_date DATE NULL,
    github_link TEXT NULL,
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS users_detail_id_user_id ON user_profile (id, user_id);

In the down file, we should have:

-- migrations/000001_create_users_table.down.sql
-- Add down migration script here
DROP TABLE IF EXISTS users;
DROP TABLE IF EXISTS user_profile;

We have been using these schemas right from when we started the authentication series.

Next, we need to execute the files so that those tables will be really created in our database:

migrate -path=./migrations -database=<database-url> up

Ensure you replace <database-url> with your real database URL. If everything goes well, your tables should now be created in your database.

It should be noted that instead of manually migrating the database, we could do that automatically, at start-up, in the main() function.
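
As a sketch of that approach, assuming the golang-migrate library (github.com/golang-migrate/migrate/v4) with its file source and postgres database drivers, a start-up migration helper could look like this:

// cmd/api/main.go (illustrative helper; call it before starting the server)
import (
    "github.com/golang-migrate/migrate/v4"
    _ "github.com/golang-migrate/migrate/v4/database/postgres"
    _ "github.com/golang-migrate/migrate/v4/source/file"
)

func runMigrations(databaseURL string) error {
    m, err := migrate.New("file://migrations", databaseURL)
    if err != nil {
        return err
    }
    // Apply all pending "up" migrations; ErrNoChange simply means
    // the database is already up to date.
    if err := m.Up(); err != nil && err != migrate.ErrNoChange {
        return err
    }
    return nil
}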

Step 2: Setting up our user model

To abstract away interacting with the database, we will create some sort of model, an equivalent of Django’s model. But before then, let’s create a type for our users in internal/data/user_types.go (create the file as it doesn’t exist yet):

// internal/data/user_types.go
package data

import (
    "database/sql"
    "errors"
    "time"

    "github.com/google/uuid"
    "goauthbackend.johnowolabiidogun.dev/internal/types"
)

type UserProfile struct {
    ID          *uuid.UUID     `json:"id"`
    UserID      *uuid.UUID     `json:"user_id"`
    PhoneNumber *string        `json:"phone_number"`
    BirthDate   types.NullTime `json:"birth_date"`
    GithubLink  *string        `json:"github_link"`
}

type User struct {
    ID          uuid.UUID   `json:"id"`
    Email       string      `json:"email"`
    Password    password    `json:"-"`
    FirstName   string      `json:"first_name"`
    LastName    string      `json:"last_name"`
    IsActive    bool        `json:"is_active"`
    IsStaff     bool        `json:"is_staff"`
    IsSuperuser bool        `json:"is_superuser"`
    Thumbnail   *string     `json:"thumbnail"`
    DateJoined  time.Time   `json:"date_joined"`
    Profile     UserProfile `json:"profile"`
}

type password struct {
    plaintext *string
    hash      string
}

type UserModel struct {
    DB *sql.DB
}

type UserID struct {
    Id uuid.UUID
}

var (
    ErrDuplicateEmail = errors.New("duplicate email")
)

These are just the basic types we'll be working on within this system. You will notice that there are three columns: names of the fields, field types, and the "renames" of the fields in JSON. The last column is very useful because, in Go, field names MUST start with capital letters for them to be accessible outside their package. The same goes for type names. Therefore, we need a way to properly send field names to requesting users, and Go helps with that via struct tags and the built-in encoding/json package. Notice also that our Password field was renamed to -. This omits that field entirely from the JSON responses it generates. How cool is that! We also defined a custom password type. This makes it easier to generate the hash of our users' passwords.

Then, there is this not-so-familiar types.NullTime in the UserProfile type. It was defined in internal/types/time.go:

// internal/types/time.go
package types

import (
    "fmt"
    "reflect"
    "strings"
    "time"

    "github.com/lib/pq"
)

// NullTime is an alias for pq.NullTime data type
type NullTime pq.NullTime

// Scan implements the Scanner interface for NullTime
func (nt *NullTime) Scan(value interface{}) error {
    var t pq.NullTime
    if err := t.Scan(value); err != nil {
        return err
    }

    // if nil then make Valid false
    if reflect.TypeOf(value) == nil {
        *nt = NullTime{t.Time, false}
    } else {
        *nt = NullTime{t.Time, true}
    }

    return nil
}

// MarshalJSON for NullTime
func (nt *NullTime) MarshalJSON() ([]byte, error) {
    if !nt.Valid {
        return []byte("null"), nil
    }
    val := fmt.Sprintf("\"%s\"", nt.Time.Format(time.RFC3339))
    return []byte(val), nil
}

const dateFormat = "2006-01-02"

// UnmarshalJSON for NullTime
func (nt *NullTime) UnmarshalJSON(b []byte) error {
    t, err := time.Parse(dateFormat, strings.Replace(
        string(b),
        "\"",
        "",
        -1,
    ))

    if err != nil {
        return err
    }

    nt.Time = t
    nt.Valid = true

    return nil
}

The reason for this is the difficulty encountered while working with possible null values for users’ birthdates. This article explains it quite well and the code above was some modification of the code there.

It should be noted that to use UUID in Go, you need an external package (we used github.com/google/uuid in our case, so install it with go get github.com/google/uuid).

Next is handling password hashing:

// internal/data/user_password.go
package data

import (
    "log"

    "github.com/alexedwards/argon2id"
)

func (p *password) Set(plaintextPassword string) error {
    hash, err := argon2id.CreateHash(plaintextPassword, argon2id.DefaultParams)
    if err != nil {
        return err
    }
    p.plaintext = &plaintextPassword
    p.hash = hash
    return nil
}

func (p *password) Matches(plaintextPassword string) (bool, error) {
    match, err := argon2id.ComparePasswordAndHash(plaintextPassword, p.hash)
    if err != nil {
        log.Fatal(err)
    }

    return match, nil
}

We used github.com/alexedwards/argon2id package to assist in hashing and matching our users’ passwords. It’s Go’s implementation of argon2id. The Set “method” does the hashing when a user registers whereas Matches confirms it when such a user wants to log in.

To validate users' inputs, a very good thing to do, we have:

// internal/data/user_validation.go
package data

import "goauthbackend.johnowolabiidogun.dev/internal/validator"

func ValidateEmail(v *validator.Validator, email string) {
    v.Check(email != "", "email", "email must be provided")
    v.Check(validator.Matches(email, validator.EmailRX), "email", "email must be a valid email address")
}

func ValidatePasswordPlaintext(v *validator.Validator, password string) {
    v.Check(password != "", "password", "password must be provided")
    v.Check(len(password) >= 8, "password", "password must be at least 8 bytes long")
    v.Check(len(password) <= 72, "password", "password must not be more than 72 bytes long")
}

func ValidateUser(v *validator.Validator, user *User) {
    v.Check(user.FirstName != "", "first_name", "first name must be provided")
    v.Check(user.LastName != "", "last_name", "last name must be provided")

    ValidateEmail(v, user.Email)
    // If the plaintext password is not nil, call the standalone
    // ValidatePasswordPlaintext() helper.
    if user.Password.plaintext != nil {
        ValidatePasswordPlaintext(v, *user.Password.plaintext)
    }
}

The code uses another custom package to validate email, password, first name, and last name — the data required during registration. The custom package looks like this:

// internal/validator/validator.go
package validator

import "regexp"

var EmailRX = regexp.MustCompile("^[a-zA-Z0-9.!#$%&'*+\\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$")

type Validator struct {
    Errors map[string]string
}

// New is a helper which creates a new Validator instance with an empty errors map.
func New() *Validator {
    return &Validator{Errors: make(map[string]string)}
}

// Valid returns true if the errors map doesn't contain any entries.
func (v *Validator) Valid() bool {
    return len(v.Errors) == 0
}

// AddError adds an error message to the map (so long as no entry already
// exists for the given key).
func (v *Validator) AddError(key, message string) {
    if _, exists := v.Errors[key]; !exists {
        v.Errors[key] = message
    }
}

// Check adds an error message to the map only if a validation check is not 'ok'.
func (v *Validator) Check(ok bool, key, message string) {
    if !ok {
        v.AddError(key, message)
    }
}

// In returns true if a specific value is in a list of strings.
func In(value string, list ...string) bool {
    for i := range list {
        if value == list[i] {
            return true
        }
    }
    return false
}

// Matches returns true if a string value matches a specific regexp pattern.
func Matches(value string, rx *regexp.Regexp) bool {
    return rx.MatchString(value)
}

// Unique returns true if all string values in a slice are unique.
func Unique(values []string) bool {
    uniqueValues := make(map[string]bool)
    for _, value := range values {
        uniqueValues[value] = true
    }
    return len(values) == len(uniqueValues)
}

Pretty easy to reason along with.

It’s finally time to create the model:

// internal/data/models.go
package data

import (
    "database/sql"
    "errors"
)

var (
    ErrRecordNotFound = errors.New("a user with these details was not found")
)

type Models struct {
    Users UserModel
}

func NewModels(db *sql.DB) Models {
    return Models{
        Users: UserModel{DB: db},
    }
}

With this, if we have another model, all we need to do is register it in Models and initialize it in NewModels.
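
For instance, registering a second, hypothetical TokenModel would look like this (TokenModel is assumed to be defined elsewhere in the package):

type Models struct {
    Users  UserModel
    Tokens TokenModel
}

func NewModels(db *sql.DB) Models {
    return Models{
        Users:  UserModel{DB: db},
        Tokens: TokenModel{DB: db},
    }
}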

Now, we need to make this model accessible to our application. To do this, add models to our application type in main.go and initialize it inside the main() function:

// cmd/api/main.go
...

import (
    ...
    "goauthbackend.johnowolabiidogun.dev/internal/data"
    ...
)
type application struct {
    ...
    models      data.Models
    ...
}

func main() {
    ...
    app := &application{
        ...
        models:      data.NewModels(db),
        ...
    }
    ...
}
...

That makes the models available to all route handlers and functions that implement the application type.

Step 3: User registration route handler

Let’s put all that housekeeping to good use. Create a new file, register.go, in cmd/api and make it look like this:

// cmd/api/register.go
package main

import (
    "errors"
    "net/http"
    "time"

    "goauthbackend.johnowolabiidogun.dev/internal/data"
    "goauthbackend.johnowolabiidogun.dev/internal/tokens"
    "goauthbackend.johnowolabiidogun.dev/internal/validator"
)

func (app *application) registerUserHandler(w http.ResponseWriter, r *http.Request) {
    // Expected data from the user
    var input struct {
        Email     string `json:"email"`
        FirstName string `json:"first_name"`
        LastName  string `json:"last_name"`
        Password  string `json:"password"`
    }
    // Decode the JSON request body into the input struct
    err := app.readJSON(w, r, &input)
    if err != nil {
        app.badRequestResponse(w, r, err)
        return
    }

    user := &data.User{
        Email:     input.Email,
        FirstName: input.FirstName,
        LastName:  input.LastName,
    }

    // Hash user password
    err = user.Password.Set(input.Password)
    if err != nil {

        app.serverErrorResponse(w, r, err)
        return
    }

    // Validate the user input
    v := validator.New()
    if data.ValidateUser(v, user); !v.Valid() {
        app.failedValidationResponse(w, r, v.Errors)
        return
    }

    // Save the user in the database
    userID, err := app.models.Users.Insert(user)
    if err != nil {
        switch {
        case errors.Is(err, data.ErrDuplicateEmail):
            v.AddError("email", "A user with this email address already exists")
            app.failedValidationResponse(w, r, v.Errors)
        default:
            app.serverErrorResponse(w, r, err)
        }
        return
    }

    // Generate 6-digit token
    otp, err := tokens.GenerateOTP()
    if err != nil {
        app.serverErrorResponse(w, r, err)
        return
    }

    // Store the token hash in Redis so the activation handler can verify it later.
    // Without it, the account could never be activated, so treat failure as fatal.
    err = app.storeInRedis("activation_", otp.Hash, userID.Id, app.config.tokenExpiration.duration)
    if err != nil {
        app.serverErrorResponse(w, r, err)
        return
    }

    now := time.Now()
    expiration := now.Add(app.config.tokenExpiration.duration)
    exact := expiration.Format(time.RFC1123)

    // Send email to user, using separate goroutine, for account activation
    app.background(func() {
        data := map[string]interface{}{
            "token":       tokens.FormatOTP(otp.Secret),
            "userID":      userID.Id,
            "frontendURL": app.config.frontendURL,
            "expiration":  app.config.tokenExpiration.durationString,
            "exact":       exact,
        }
        err = app.mailer.Send(user.Email, "user_welcome.tmpl", data)
        if err != nil {
            app.logError(r, err)
            return
        }
        app.logger.PrintInfo("Email successfully sent.", nil, app.config.debug)
    })

    // Respond with success
    app.successResponse(
        w,
        r,
        http.StatusAccepted,
        "Your account creation was accepted successfully. Check your email address and follow the instruction to activate your account. Ensure you activate your account before the token expires",
    )
}

Though a bit long, reading through the lines gives you the whole idea! We expect four (4) fields from the user. After decoding the JSON request body with readJSON, a method created previously, we initialize the User type, hash the supplied password via Set, and then validate the user-supplied data. If everything is good, we use Insert, a method on UserModel that lives in internal/data/user_queries.go, to save the user in the database. The method is simple:

// internal/data/user_queries.go
package data

import (
    "context"
    "database/sql"
    "errors"
    "log"
    "time"

    "github.com/google/uuid"
)

func (um UserModel) Insert(user *User) (*UserID, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()

    tx, err := um.DB.BeginTx(ctx, nil)
    if err != nil {
        return nil, err
    }

    var userID uuid.UUID

    query_user := `
    INSERT INTO users (email, password, first_name, last_name) VALUES ($1, $2, $3, $4) RETURNING id`
    args_user := []interface{}{user.Email, user.Password.hash, user.FirstName, user.LastName}

    if err := tx.QueryRowContext(ctx, query_user, args_user...).Scan(&userID); err != nil {
        switch {
        case err.Error() == `pq: duplicate key value violates unique constraint "users_email_key"`:
            return nil, ErrDuplicateEmail
        default:
            return nil, err
        }

    }

    query_user_profile := `
    INSERT INTO user_profile (user_id) VALUES ($1) ON CONFLICT (user_id) DO NOTHING RETURNING user_id`

    _, err = tx.ExecContext(ctx, query_user_profile, userID)

    if err != nil {
        return nil, err
    }

    if err = tx.Commit(); err != nil {
        return nil, err
    }
    id := UserID{
        Id: userID,
    }

    return &id, nil
}

We used Go’s database transactions to execute our SQL queries, with a 3-second timeout so the database work either finishes within that window or gets cancelled. If the insertion query succeeds, the new user’s ID is returned.
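One detail worth flagging: the code above only calls tx.Commit(), so if one of the queries fails, the transaction is never explicitly rolled back. A common Go idiom is to defer a rollback immediately after BeginTx; a minimal sketch of the adjustment:

tx, err := um.DB.BeginTx(ctx, nil)
if err != nil {
    return nil, err
}
// Rollback is a no-op (it returns sql.ErrTxDone) once Commit has
// succeeded, so it is safe to defer it unconditionally.
defer tx.Rollback()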

Next, we generated a token for the new user. The token is a random, cryptographically secure 6-digit number, whose hash is then computed with the SHA-256 algorithm. The entire logic is:

// internal/tokens/utils.go
package tokens

import (
    "crypto/rand"
    "crypto/sha256"
    "fmt"
    "math/big"

    "strings"

    "goauthbackend.johnowolabiidogun.dev/internal/validator"
)

type Token struct {
    Secret string
    Hash   string
}

func GenerateOTP() (*Token, error) {
    // rand.Int returns a uniform random number in [0, 900000); adding
    // 100000 shifts it into [100000, 999999], always exactly six digits.
    bigInt, err := rand.Int(rand.Reader, big.NewInt(900000))
    if err != nil {
        return nil, err
    }
    sixDigitNum := bigInt.Int64() + 100000

    // Format the number as a zero-padded 6-digit string
    sixDigitStr := fmt.Sprintf("%06d", sixDigitNum)

    token := Token{
        Secret: sixDigitStr,
    }

    hash := sha256.Sum256([]byte(token.Secret))

    token.Hash = fmt.Sprintf("%x", hash)

    return &token, nil
}

func FormatOTP(s string) string {
    length := len(s)
    half := length / 2
    firstHalf := s[:half]
    secondHalf := s[half:]
    words := []string{firstHalf, secondHalf}
    return strings.Join(words, " ")
}

func ValidateSecret(v *validator.Validator, secret string) {
    v.Check(secret != "", "token", "must be provided")
    v.Check(len(secret) == 6, "token", "must be 6 digits long")
}
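As a quick worked example, FormatOTP("493021") returns "493 021": the same six digits, split in half and joined with a space so the code is easier to read in an email.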

After the token generation, we temporarily store the token hash in Redis using the storeInRedis method and then, in a background goroutine, send the user an email with instructions on how to activate their account. The functions used live in cmd/api/helpers.go:

// cmd/api/helpers.go

...
func (app *application) storeInRedis(prefix string, hash string, userID uuid.UUID, expiration time.Duration) error {
    ctx := context.Background()
    err := app.redisClient.Set(
        ctx,
        fmt.Sprintf("%s%s", prefix, userID),
        hash,
        expiration,
    ).Err()
    if err != nil {
        return err
    }

    return nil
}

func (app *application) background(fn func()) {
    app.wg.Add(1)

    go func() {

        defer app.wg.Done()
        // Recover any panic.
        defer func() {
            if err := recover(); err != nil {
                app.logger.PrintError(fmt.Errorf("%s", err), nil, app.config.debug)
            }
        }()
        // Execute the arbitrary function that we passed as the parameter.
        fn()
    }()
}

The tokens expire and are automatically deleted from Redis once TOKEN_EXPIRATION has elapsed.
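Retrieval is the mirror image. Although the activation handler belongs to the next article, a sketch of the lookup, assuming the same redisClient and key scheme, might look like this (with go-redis, a redis.Nil error signals that the key has expired or never existed):

// Sketch only: fetch the stored token hash for a user.
func (app *application) getFromRedis(prefix string, userID uuid.UUID) (string, error) {
    ctx := context.Background()
    // Reconstruct the same key that storeInRedis wrote.
    hash, err := app.redisClient.Get(ctx, fmt.Sprintf("%s%s", prefix, userID)).Result()
    if err != nil {
        return "", err
    }
    return hash, nil
}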

I think we should stop here, as this article is getting pretty long. In the next one, we will implement the missing methods, configure our app for email sending, and build the handler that activates users’ accounts. Enjoy!

Outro

Enjoyed this article? Consider contacting me for a job or something else worthwhile, or buying me a coffee ☕. You can also connect with/follow me on LinkedIn and Twitter. And please consider sharing this article for wider coverage; I will appreciate it…

The best TRAIT of RUST 🔥 (no pun intended)

Introduction

So, your friend can’t stop talking about how they use Rust. Perhaps they’ve mentioned how it’s safe, has great memory management, or how it’s BLAZINGLY fast. Although all of that is true, there is another great reason to use Rust: its traits.

Never heard of traits? Well, it’s your lucky day.

What is a trait?

Do you remember when you first learnt inheritance and it seemed like an amazing idea? You create an animal class with a noise method and an eat method, then a cat and a dog that inherit those methods. Next you want a robot that speaks but doesn’t eat, so now you need a speaker class that both the animal class and the robot class extend. Suddenly, when you realize that the cat and the robot both need to catch mice, you find yourself struggling to figure out what to do. Everything you were promised about inheritance seems like a lie.

Fortunately, there is another approach to this problem called composition. Instead of a cat being an animal, a cat is an entity composed of the traits speaker, eater, and mouse hunter. The robot is composed of the speaker and mouse hunter traits. With composition, you can create more flexible and reusable code that adapts to different contexts. This is why traits are super useful in Rust.
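Here’s a tiny, hypothetical sketch of that idea in Rust; notice that Cat and Robot only implement the traits that actually apply to them:

trait Speaker {
    fn speak(&self) -> String;
}

trait MouseHunter {
    fn hunt(&self) -> String;
}

struct Cat;
struct Robot;

impl Speaker for Cat {
    fn speak(&self) -> String { "meow".to_string() }
}

impl MouseHunter for Cat {
    fn hunt(&self) -> String { "pounce!".to_string() }
}

impl Speaker for Robot {
    fn speak(&self) -> String { "beep boop".to_string() }
}

// The robot hunts mice too, without ever pretending to eat.
impl MouseHunter for Robot {
    fn hunt(&self) -> String { "scanning for mice...".to_string() }
}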

Now, you are probably thinking that this already exists in other languages like Java and C#. But Rust’s traits have a few nifty tricks that interfaces in those languages do not. We will see them a bit later.

The Vector Example

An example that I personally think shows off traits best is vectors. The math kind, not the array kind.

Creating a Vec2 Struct

Let’s first define a Vec2 struct with a generic type for the x and y attributes.

struct Vec2<T> {
    x: T,
    y: T
}

Printing

Now, in our main function, let’s create a Vec2 and print it.

fn main() {
    let v1 = Vec2 {
        x: 1,
        y: 2,
    };

    println!("{}", v1);
}

You will quickly realize this doesn’t work when the compiler throws the error doesn't implement 'std::fmt::Display'. This happens because the Display trait must be implemented for any type you print with “{}”.
A quick solution is to print using the Debug trait instead because, unlike Display, the Debug trait can be derived. Deriving is essentially a macro (code generated at compile time) that automatically implements the trait for you. This can be done by adding #[derive(Debug)] above the struct. Note: “{:?}” is the debug placeholder in the print macro.

#[derive(Debug)]
struct Vec2<T> {
    x: T,
    y: T
}

fn main() {
    let v1 = Vec2 {
        x: 1,
        y: 2,
    };

    println!("{:?}", v1);
}

Implementing traits for existing types

So now let’s say we want to be able to convert an i32 (32-bit integer) to a Vec2. One way to do this is by creating a ToVec2 trait that defines a to_vec2 method.

trait ToVec2<T> {
    fn to_vec2(&self) -> Vec2<T>;
}

impl ToVec2<i32> for i32 {
    fn to_vec2(&self) -> Vec2<i32> {
        Vec2 { 
            x: *self,
            y: *self 
        }
    }
}

This now means we can call the to_vec2 method on any value of type i32. Implementing your own trait for an existing, even primitive, type is something interfaces in most other languages cannot do, and it is a big part of why traits are so powerful in Rust.
We can now modify the main function to show this working in action.

fn main() {
    let v1 = Vec2 {
        x: 1,
        y: 2,
    };

    let v2 = 8.to_vec2();

    println!("{:?}, {:?}", v1, v2);
}

Operator overloading

Another use case for traits is operator overloading. This is where you have a struct or class and wish to allow the user to apply an operator such as ‘+’ to add two values together.
Let’s now implement the Add trait for Vec2. To do this, we first need to ensure that the generic type used for x and y also implements Add, so those values can be added to the other vector’s x and y values. There are two kinds of syntax we can use for this.
The first way is to add a bound in the angle brackets. This is good for when you’re adding one or two traits.

use std::ops::Add;

#[derive(Debug)]
struct Vec2<T: Add> {
    x: T,
    y: T
}

But let’s say we wish to require multiple traits. In this case, let’s add the Copy and Clone traits too. We can then use the where keyword as follows.

use std::ops::Add;

#[derive(Debug)]
struct Vec2<T> 
where
    T: Add
       + Clone
       + Copy
{
    x: T,
    y: T
}

We also need to make sure that these bounds exist on the ToVec2 trait, since it uses the same generic.

trait ToVec2<T>
where
    T: Add
       + Clone
       + Copy
{
    fn to_vec2(&self) -> Vec2<T>;
}

Now we need to implement Add for Vec2. To do this, we need to define the associated Output type as Vec2<T>, and we also need to tighten T’s bound to Add<Output = T>, since adding two T values must produce another T that satisfies the traits required by Vec2.

impl<T> Add for Vec2<T>
where
    T: Add<Output = T>
       + Clone
       + Copy
{
    type Output = Vec2<T>;

    fn add(self, rhs: Self) -> Self::Output {
        Vec2 {
            x: self.x + rhs.x, 
            y: self.y + rhs.y
        }
    }
}

Before we test this, we need to make sure that Vec2 also derives Copy and Clone to prevent issues related to moves and borrowing.

#[derive(Debug, Clone, Copy)]
struct Vec2<T> 

Finally we can test if the addition works by updating our main function.

fn main() {
    let v1 = Vec2 {
        x: 1,
        y: 2,
    };

    let v2 = 8.to_vec2();

    let v3 = v1 + v2;

    println!("{:?} + {:?} = {:?}", v1, v2, v3);
}

IT WORKS!!!
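For reference, the program should print: Vec2 { x: 1, y: 2 } + Vec2 { x: 8, y: 8 } = Vec2 { x: 9, y: 10 }.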

Challenge

Try implementing more operations for this struct, and implement the ToVec2 trait for more types. If you would like to do some further reading, try replacing ToVec2 with the standard From and Into traits.
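If you want a head start on that last part, here is a minimal sketch of the From direction (implementing From gives you the matching Into for free):

impl From<i32> for Vec2<i32> {
    fn from(value: i32) -> Self {
        Vec2 { x: value, y: value }
    }
}

// Usage: the type annotation picks the target type for .into().
// let v: Vec2<i32> = 8.into();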

Conclusion

We’ve explored the issues with inheritance and how trait-based composition in Rust can help. We’ve looked at how traits are derived, used for operator overloading, and implemented for existing types. Hopefully this is a compelling reason to use Rust apart from its BLAZINGLY FAST performance.

Thanks for reading!

P.S. If you share, comment on and like this post, I will show how we can one-up this Vec2 struct with macros that do the work for us.


Pricing and packaging: Working as a cross-functional team

Hey there! My name is Andrea Bailiff-Gush, and I’m the Head of Product Marketing at a startup called AppOmni.

I think we all have superpowers as product marketers and my superpower is cross-functional collaboration, which is a big part of what we’re going to focus on today as we dive into how to price and package a product, make it a strategic revenue driver in your organization, and get a seat at the table when pricing decisions are being made.

In this article, I’ll be focusing on:

  • What pricing and packaging is
  • B2B pricing and plans
  • Why product marketing needs to be involved in pricing decisions
  • The pricing dream team
  • How to build a pricing team
  • Pricing team objectives
  • How to communicate pricing decisions internally
  • How to make pricing strategy a strategic revenue driver
