Ambika Aggarwal, Author at ProdSens.live (News for Project Managers, PMI)

Webpack Vs Parcel: A quick Look for Developers
Sun, 28 Jan 2024


As developers, we constantly strive to improve our workflows and build efficient, performant applications. One critical aspect of web development is bundling and optimizing our code for production. Two popular tools that help us achieve this are Webpack and Parcel. In this article, we’ll take a closer look at these bundlers, exploring their key differences and helping you make an informed choice for your projects.

Introduction to Webpack and Parcel

Webpack: The Battle-Tested Bundler

Webpack has long been a staple in the JavaScript ecosystem, gaining popularity for its powerful features and extensive configuration options. It acts as a module bundler, combining various assets such as JavaScript, CSS, and images into a single, optimized bundle. Webpack’s ability to handle complex dependencies and its support for hot module replacement (HMR) make it a go-to choice for many developers.

Webpack relies on a configuration file (often named webpack.config.js) to define how the bundling process should be executed. While this configuration might seem daunting at first, it grants developers fine-grained control over the bundling process. However, the learning curve associated with Webpack configuration can be steep, especially for newcomers.

Parcel: The Zero-Configuration Bundler

In contrast, Parcel positions itself as a zero-configuration alternative to Webpack. It aims to simplify the bundling process by requiring minimal setup from developers. With Parcel, you can get started quickly without the need for an extensive configuration file. This makes it an attractive choice for small to medium-sized projects, where simplicity and rapid development are paramount.

Parcel’s ability to automatically handle various file types, including HTML, CSS, and JavaScript, makes it a compelling option for developers who prefer a more straightforward setup. The absence of a complex configuration file reduces the barrier to entry, allowing developers to focus on writing code rather than spending time tweaking bundler settings.
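For comparison, a Parcel project typically needs nothing beyond a couple of package.json scripts. The sketch below shows them as a plain object; the entry file name is an assumption, since Parcel accepts any HTML, JS, or CSS entry:

```javascript
// Illustrative package.json "scripts" for a Parcel project, shown as a
// plain object. Parcel infers the rest from the entry file itself.
const scripts = {
  start: 'parcel src/index.html',       // dev server with HMR, zero config
  build: 'parcel build src/index.html', // optimized production build
};
```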

Key Differences Between Webpack and Parcel

Now that we’ve introduced Webpack and Parcel, let’s delve into the key differences that set them apart.

Configuration Overhead

One of the most noticeable distinctions between Webpack and Parcel is the configuration overhead. Webpack requires developers to create and maintain a configuration file, specifying how the bundling process should be executed. This file can become intricate as projects grow in complexity, demanding a good understanding of Webpack’s configuration options.

On the other hand, Parcel follows a zero-configuration philosophy. Developers can start using Parcel with minimal setup, as it automatically detects and bundles supported file types. This simplicity makes Parcel an attractive choice for beginners and those who prioritize quick development cycles.

Ecosystem and Community Support

Webpack boasts a mature and extensive ecosystem with a vast community of users. This means there are plenty of plugins and loaders available to extend Webpack’s functionality. The wealth of resources and community support makes it easier to find solutions to common issues and leverage best practices.

Parcel, being a relatively newer player in the bundler scene, has a growing ecosystem. While it may not match Webpack’s breadth of plugins and loaders, Parcel’s simplicity and ease of use have gained it a dedicated user base. Developers who appreciate a lightweight approach to bundling may find Parcel’s ecosystem sufficient for their needs.

Performance

Both Webpack and Parcel aim to optimize the performance of your web applications, but they take different approaches.

Webpack offers advanced optimization options, such as tree shaking, code splitting, and caching. These features enable developers to create smaller bundles and improve loading times, especially for larger applications. However, achieving optimal performance with Webpack may require careful configuration and tuning; the available options are covered in the official Webpack configuration documentation.
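As a hedged sketch, these features are tuned in the optimization block of a webpack config. The option names below follow webpack 5; the values are illustrative, not recommended defaults:

```javascript
// Illustrative webpack 5 optimization settings for the features named
// above: tree shaking, code splitting, and long-term caching.
const optimization = {
  usedExports: true,      // flag unused exports so minifiers can drop them (tree shaking)
  splitChunks: {
    chunks: 'all',        // split shared/vendor code into separate bundles
  },
  runtimeChunk: 'single', // isolate the runtime so content hashes stay stable
};
```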

Parcel, with its zero-configuration approach, simplifies performance optimization. It automatically applies best practices, such as minification and scope hoisting, without requiring explicit configuration. While this makes Parcel an excellent choice for quick development, developers may have less control over fine-tuning performance optimizations compared to Webpack; Parcel’s production build documentation covers what it does out of the box.

Development Server and Hot Module Replacement

Webpack’s development server, coupled with its Hot Module Replacement (HMR) feature, has been a significant selling point. HMR allows developers to see changes in real time without a full page reload, enhancing the development experience. Webpack’s dev server is highly configurable, allowing developers to tailor it to their specific needs; the webpack-dev-server documentation covers HMR setup and rapid development in detail.
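In application code, opting into HMR looks roughly like the sketch below. The './app' path and the render callback are placeholders, and module.hot is webpack's HMR API, which only exists in a dev build:

```javascript
// Hedged sketch of opting into webpack-style HMR from application code.
// The guard makes this a no-op outside a webpack dev build.
function enableHot(mod, render) {
  if (mod && mod.hot) {
    // Accept updates to './app' and re-render instead of reloading the page.
    mod.hot.accept('./app', render);
    return true;
  }
  return false;
}

// In plain Node (or a production bundle) there is no HMR runtime,
// so this returns false and nothing happens.
const enabled = enableHot(typeof module !== 'undefined' ? module : undefined, () => {});
```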

Parcel, too, comes with a built-in development server and HMR support out of the box. While it may not offer the same level of customization as Webpack, Parcel’s zero-configuration setup makes HMR work without additional setup; Parcel’s HMR documentation shows how it improves the development experience.

Which One Should You Choose?

The decision between Webpack and Parcel ultimately depends on your project requirements, personal preferences, and the development team’s expertise.

Choose Webpack If:

  • You require fine-grained control over the bundling process.
  • Your project involves complex configurations and advanced optimizations.
  • You value a mature ecosystem with extensive community support.
  • Hot Module Replacement (HMR) is a crucial aspect of your development workflow.

Choose Parcel If:

  • You prioritize quick setup and a zero-configuration approach.
  • Your project is small to medium-sized, and simplicity is essential.
  • You are a beginner or prefer a more straightforward development process.
  • Fine-tuning performance optimizations is less critical for your use case.

Conclusion

In the Webpack vs Parcel debate, there is no one-size-fits-all answer. Both bundlers have their strengths and cater to different developer preferences and project requirements. Webpack, with its extensive configurability and mature ecosystem, remains a robust choice for large, complex projects. On the other hand, Parcel’s zero-configuration approach makes it an appealing option for developers who prioritize simplicity and speed of development.

Before making a decision, consider the specific needs of your project, your team’s familiarity with each tool, and the level of control and customization you require. Regardless of your choice, both Webpack and Parcel contribute to an ever-evolving landscape of tools that empower developers to build performant, scalable web applications.

The post Webpack Vs Parcel: A quick Look for Developers appeared first on ProdSens.live.

Keeping ServiceNow Updated with Automated AWS Discovery
Wed, 24 Jan 2024


As AWS builders, we know how fast cloud environments evolve. Resources get added, changed, and removed continuously. If your inventory reporting or cloud discovery is not capturing changes in real time, before you know it your CMDB is full of blind spots and outdated data.

Relying on manual cloud discovery and scheduled updates leads to inaccuracy. And native tools often cover only a handful of core AWS services, leaving 100+ others to custom development work.

Recently, our enterprise customers expressed these struggles with keeping accurate records of cloud resources in their ServiceNow CMDB. In response, as part of our last Launch Week, we built a Turbot Guardrails integration that helps customers capture real-time resource changes from multi-cloud environments into ServiceNow.

Automated AWS Discovery with Guardrails

This AWS & ServiceNow integration via Turbot Guardrails provides real-time automation to discover resources across 100+ AWS services. As your infrastructure changes, Guardrails detects it and handles updating integrated systems like ServiceNow.

It augments native discovery capabilities by:

  • Adding more comprehensive AWS resource coverage
  • Handling deletions and archiving records
  • Flexibly mapping data to different CMDB tables
  • Eliminating dependency on legacy scheduled jobs

This means you get complete visibility and accuracy as changes occur without the overhead.

How to configure automated AWS resource discovery for your ServiceNow CMDB

After you have integrated your ServiceNow instance with Turbot Guardrails, each AWS resource type can be configured to sync to the ServiceNow CMDB. Most often you would set the scope of the policy across many AWS resources from all your AWS accounts. In this example we will show how to enable syncing AWS S3 Buckets.

Simply set the Turbot Guardrails policy to “Enforce: Sync” and apply to all or specific AWS accounts:

Turbot Guardrails policy to Enforce AWS Syncing to ServiceNow

For the AWS account we enabled the integration for, the following AWS resources will be in scope for the AWS discovery:

AWS S3 Buckets Managed by Turbot Guardrails Example

Instantly the AWS resources will be added to the associated ServiceNow CMDB table:

AWS S3 Buckets Synced to ServiceNow by Turbot Guardrails Example

As AWS resources are added, updated, or deleted, Turbot Guardrails handles the configuration drift and keeps ServiceNow CMDB updated.

For example, when an AWS resource changes, Turbot Guardrails captures the configuration drift and updates ServiceNow CMDB:

AWS Configuration Drift Captured by Turbot Guardrails

AWS resource deletion can be managed as a complete synchronization where the record in ServiceNow is deleted as well, or archived to retain its record with an archive status.

Configure AWS Discovery to your CMDB Tables

You can configure the AWS to ServiceNow discovery sync behavior by:

  • Scoping to specific AWS services
  • Defining archive vs delete flow for resource deletions
  • Adding custom CMDB table columns and mappings

This level of control lets you tailor it to your unique CMDB table definitions whether directly to a table or table extension.

Keep your AWS to ServiceNow Discovery Simple

Managing AWS cloud discovery to ServiceNow does not need to be difficult and time-consuming. Using this Guardrails integration, you can automate AWS resource discovery across 100+ AWS services and sync to ServiceNow CMDB in just minutes. This can accelerate new integration efforts or augment existing methods with more accuracy and timely updates when changes occur.

Whether you are new to cloud discovery or looking to enhance existing capabilities, try a 14-day free trial by signing up directly with Turbot or through the AWS Marketplace.

The post Keeping ServiceNow Updated with Automated AWS Discovery appeared first on ProdSens.live.

How to build great hardware and software products - Dominik Busching (Head of Product, tado)
Wed, 24 Jan 2024

Discussing the creation of outstanding digital products is a frequently addressed subject within the community. So on this week’s podcast, we had the privilege of hosting Dominik Busching, Head of Product Management at Tado. In this episode, Dominik offers valuable insights into how product managers can effectively build and innovate with both hardware and software products.

The post How to build great hardware and software products Dominik Busching (Head of Product, tado) appeared first on Mind the Product.

The post How to build great hardware and software products Dominik Busching (Head of Product, tado) appeared first on ProdSens.live.

🎁 Shortcut to Find Open Source Projects 100x faster
Thu, 30 Nov 2023


Today, I’ve got 8 ways that can help you find your dream open-source project.

Before delving into the nitty-gritty of how to find open-source projects, let’s first understand what open-source means.

Open Source is more than merging a PR

In a world where we are more connected than ever, being a part of an open-source community can be the key to unlocking new opportunities and achieving personal growth.

For me, it’s an opportunity to make a difference without needing a job, leaving an impact on millions of users.

You Code. Collaborate. Network.

But most important of all, you’re welcome, and you interact with experienced people all the time.

Tip: Pick good organizations rather than individual repositories for long-term benefit.

I have made 200+ Pull Requests and participated in over 400 discussions, so I am familiar with what is required and the standards of good open-source projects.

Most people struggle with how to find good open-source projects. This article provides numerous options tailored to be a perfect fit for you.

1. Trending GitHub

You can find trending repositories based on spoken language, programming language, and date.

These are all the elite repositories that can boost your credibility and reputation in the open-source community.

Trending GitHub

 

2. GitHub Advanced Filters

If you want complete control over the search, then this option is a perfect fit for you.

You can filter using more than 60+ options, including Language, Number of Stars, Number of forks, License, Issues, and even Commits.

Advanced Filters page
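As an illustration, such a query can also be assembled into a search URL programmatically. The qualifier names below (language, stars, good-first-issues) follow GitHub's documented search syntax; the helper itself is hypothetical:

```javascript
// Hypothetical helper that turns a map of GitHub search qualifiers
// into a repository-search URL.
function buildSearchUrl(filters) {
  const query = Object.entries(filters)
    .map(([key, value]) => `${key}:${value}`)
    .join(' ');
  return `https://github.com/search?type=repositories&q=${encodeURIComponent(query)}`;
}

// e.g. Go repos with more than 500 stars and more than 10 good first issues
const url = buildSearchUrl({ language: 'go', stars: '>500', 'good-first-issues': '>10' });
```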

 

3. Good First Issues

If you’re starting with open source, don’t make it more complex than it is.

Remember, issues suitable for new contributors are often labeled as good first issue or help wanted, helping you make your first contribution to open-source.

You can find several good first issues with the option to choose your preferred language through a friendly user interface.

Good First Issues

 

4. Up For Grabs

This option is one of the most popular websites to find good open-source projects.

You can filter by name and label, such as good first issues, and explore popular tags like opencv and android. Additionally, you can check when the repository was last updated.

Up for grabs

 

5. First Contributions

A website where you can search projects from a pre-defined list using your preferred language as a filter.

First Contributions

 

6. Quine

Quine helps you monetize your reputation by contributing to open source. They have their leaderboard, quests, and so many innovative features.

You can search for projects without signing up, but I highly recommend you sign up and explore.

The standout feature is that it displays PR merge time in hours, shows how many new contributors there are for the current month, and details the types of issues. It provides a clear idea about the project.

Quine

You can even add widgets to your profile. So, go ahead and explore.

quine widgets

 

7. OpenSauced

There are numerous stats that add credibility, offering relevant filter options such as Top 100 Repos, Minimum 5 Contributors, Recent, and Most Active.

You can review PR Velocity and PR Overview, and filter using language or tags.

OpenSauced

Among the features that set OpenSauced apart, one standout is that it recommends a few excellent repositories tailored to your open-source journey.

Recommendations

There are many more features, such as creating highlights to track activities in your chosen repositories.

 

8. GSOC Organizations

From my experience, I can say that the benefits of contributing to organizations are much more than individual repositories.

You must have heard about Google Summer of Code, in which reputable organizations participate.

Here, you can explore the list of all accepted organizations in Google Summer of Code with their tech-stack and the option to filter by topics and categories.

GSOC Organizations

If you’re keen on sponsoring this post, shoot me a message at anmolbaranwal119@gmail.com or hit me up on Twitter! 🚀

If you’ve got some fantastic suggestions up your sleeve, drop a comment, and I’ll happily add them to the post.

Who knows? You might discover new passions, make lifelong friends, and achieve personal growth beyond your wildest dreams. So take the first step and contribute to the open-source community. The world is waiting for you.

If you enjoy my content, show your support by following me on my GitHub & Twitter:

The post 🎁 Shortcut to Find Open Source Projects 100x faster appeared first on ProdSens.live.

Ownership: 5 ways to Amplify Your Software Engineering Success
Sun, 26 Nov 2023


For software engineers at all levels, from beginners to seasoned veterans, taking ownership of your work is key to multiplying your success. It’s about being proactive, visionary, and communicative in your approach. Here’s how you can embody these principles in your daily work to achieve greater success in your engineering career.

1. Keep a Finger on the Pulse of Your Codebases

Whether you’re working in the frontend, backend, or full-stack, staying updated with every change in your codebase is essential. This means not just being aware of the changes you make but also understanding those made by your teammates. Regularly engage in code reviews and discussions about new features or fixes. This deep understanding of your domain helps you anticipate issues, innovate solutions, and collaborate more effectively with your team.

Never take code reviews for granted. It’s easy to get consumed with your own work and feel as if you’re too busy, but code reviews are an opportunity for everyone involved to get better.

2. Guide Your Work from Start to Finish

Ownership means guiding your projects from conception to completion. This involves setting clear goals, planning effectively, and being accountable for the outcome. Whether you’re working on a small module or a large system, treat each task as your personal responsibility. Ensure that every phase of the project aligns with the overall objectives, and make adjustments as needed to stay on course. Think of whatever you’re working on as a business deal, and strive to be there every step of the way until the deal closes.

If someone has questions or issues around your feature, be the first person to step up and get things resolved.

3. Adopt a Long-Term Mindset

Developing a long-term perspective is crucial. Think about how your work fits into the larger picture and contributes to the company’s goals. Focus on creating sustainable, scalable solutions rather than quick fixes. This long-term mindset will not only enhance the quality of your work but also aid in your professional growth and development. When thinking about solution implementation, I like to ask myself if the pattern in question would be acceptable at scale. If you wouldn’t want the pattern repeated throughout the codebase, I would consider taking a step back and coming up with a different approach.

Chances are if you aren’t making the codebase better for your future self and those who come after you, you are making it worse.

4. Take Initiative

Don’t wait for instructions or opportunities to come to you. Be proactive in identifying areas for improvement or innovation in your projects. Propose new ideas, volunteer for challenging tasks, and be willing to step outside your comfort zone. Taking initiative demonstrates your commitment and can lead to more significant contributions and recognition in your role.

A good example of this is codebase tech debt and maintenance. I’d wager that all codebases will accumulate tech debt at some point; this is totally normal. What shouldn’t be accepted is letting that debt go unaddressed until the point of no return. If you notice that a codebase has tech debt and there are no plans to address it, strive to be the one who speaks up and starts to put a plan in place to ensure a better codebase.

5. Overcommunicate

Effective communication is vital in software engineering. Make it a point to clearly communicate your progress, challenges, and successes with your team and stakeholders. Ensure that there’s no ambiguity in your work and that everyone involved has the necessary information to make informed decisions. Overcommunicating doesn’t mean overwhelming others with details, but providing clarity and maintaining transparency.

This is especially important when working with another team that is crucial to a successful implementation. For example, if you’re a backend dev working on an endpoint, keep the potential consumers of the endpoint updated along the way, from ideation to completion.

By staying informed, guiding your work, adopting a long-term view, taking initiative, and communicating effectively, you’re not just completing tasks – you’re building a foundation for lasting success in your engineering career. These five principles of ownership are essential for any software engineer looking to make a meaningful impact in their field.

The post Ownership: 5 ways to Amplify Your Software Engineering Success appeared first on ProdSens.live.

Building a fully Type-Safe Event-Driven Backend in Go
Fri, 10 Nov 2023


TL;DR

This guide shows you how to build a fully Type-Safe event-driven backend in Go, implementing an Uptime Monitoring system as an example.

We’ll be using Encore to build our backend, as it provides end-to-end type-safety including infrastructure(!).

💡Having type-safety in infrastructure is great because it means fewer bugs caused by things like message queues. You can easily identify issues during development and avoid post-deployment issues that affect users. More on this later!

type safety meme

🚀 What’s on deck:

  • Install Encore
  • Create your app from a starter branch
  • Run locally to try the frontend
  • Build the backend
  • Deploy to Encore’s free development cloud

✨ Final result:

Uptime Monitor

Demo app: Try the app

When we’re done, we’ll have a backend with this type-safe event-driven architecture:

Uptime Monitor Architecture
In this diagram (automatically generated by Encore) you can see individual services as white boxes, and Pub/Sub topics as black boxes.

🏁 Let’s go!

To make it easier to follow along, we’ve laid out a trail of croissants to guide your way.

Whenever you see a 🥐 it means there’s something for you to do!

💽 Install Encore

Install the Encore CLI to run your local environment:

  • macOS: brew install encoredev/tap/encore
  • Linux: curl -L https://encore.dev/install.sh | bash
  • Windows: iwr https://encore.dev/install.ps1 | iex

Create your Encore application

🥐 Create your new app from this starter branch with a ready-to-go frontend to use:

encore app create uptime --example=github.com/encoredev/example-app-uptime/tree/starting-point

💻 Run your app locally

🥐 Check that your frontend works by running your app locally.

cd uptime
encore run

You should see this:
Encore Run
This means Encore has started your local environment and created local infrastructure for Pub/Sub and databases.

Then visit http://localhost:4000/frontend/ to see the frontend. The functionality won’t work yet, since we haven’t built the backend.

Let’s do that now!

🔨 Create the monitor service

Let’s start by creating the functionality to check if a website is currently up or down.

Later we’ll store this result in a database so we can detect when the status changes and send alerts.

🥐 Create a service named monitor containing a file named ping.go. With Encore, you do this by creating a Go package:

mkdir monitor
touch monitor/ping.go

🥐 Add an API endpoint named Ping that takes a URL as input and returns a response indicating whether the site is up or down.

With Encore you do this by creating a function and adding the //encore:api annotation to it.

Paste this into the ping.go file:

package monitor

import (
    "context"
    "net/http"
    "strings"
)

// PingResponse is the response from the Ping endpoint.
type PingResponse struct {
    Up bool `json:"up"`
}

// Ping pings a specific site and determines whether it's up or down right now.
//
//encore:api public path=/ping/*url
func Ping(ctx context.Context, url string) (*PingResponse, error) {
    // If the url does not start with "http:" or "https:", default to "https:".
    if !strings.HasPrefix(url, "http:") && !strings.HasPrefix(url, "https:") {
        url = "https://" + url
    }

    // Make an HTTP request to check if it's up.
    req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
    if err != nil {
        return nil, err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return &PingResponse{Up: false}, nil
    }
    resp.Body.Close()

    // 2xx and 3xx status codes are considered up
    up := resp.StatusCode < 400
    return &PingResponse{Up: up}, nil
}

🥐 Let’s try it! Make sure you have Docker installed and running, then run encore run in your terminal and you should see the service start up.

🥐 Now open up the Local Dev Dashboard running at http://localhost:9400 and try calling the monitor.Ping endpoint, passing in google.com as the URL.

If you prefer to use the terminal, run curl http://localhost:4000/ping/google.com in a new terminal. Either way, you should see the response:

{"up": true}

You can also try with httpstat.us/400 and some-non-existing-url.com and it should respond with {"up": false}.
(It’s always a good idea to test the negative case as well.)

🧪 Add a test

🥐 Let’s write an automated test so we don’t break this endpoint over time. Create the file monitor/ping_test.go and add this code:

package monitor

import (
    "context"
    "testing"
)

func TestPing(t *testing.T) {
    ctx := context.Background()
    tests := []struct {
        URL string
        Up  bool
    }{
        {"encore.dev", true},
        {"google.com", true},
        // Test both with and without "https://"
        {"httpbin.org/status/200", true},
        {"https://httpbin.org/status/200", true},

        // 4xx and 5xx should be considered down.
        {"httpbin.org/status/400", false},
        {"https://httpbin.org/status/500", false},
        // Invalid URLs should be considered down.
        {"invalid://scheme", false},
    }

    for _, test := range tests {
        resp, err := Ping(ctx, test.URL)
        if err != nil {
            t.Errorf("url %s: unexpected error: %v", test.URL, err)
        } else if resp.Up != test.Up {
            t.Errorf("url %s: got up=%v, want %v", test.URL, resp.Up, test.Up)
        }
    }
}

🥐 Run encore test ./... to check that it all works as expected. You should see something like this:

$ encore test ./...
9:38AM INF starting request endpoint=Ping service=monitor test=TestPing
9:38AM INF request completed code=ok duration=71.861792 endpoint=Ping http_code=200 service=monitor test=TestPing
[... lots more lines ...]
PASS
ok      encore.app/monitor      1.660s

🎉 It works. Well done!

🔨 Create site service

Next, we want to keep track of a list of websites to monitor.

Since most of these APIs will be simple CRUD (Create/Read/Update/Delete) endpoints, let’s build this service using GORM, an ORM library that makes building CRUD endpoints really simple.

🥐 Create a new service named site with a SQL database. To do so, create a new site directory in the application root, with a migrations folder inside it:

mkdir site
mkdir site/migrations

🥐 Add a database migration file inside that folder, named 1_create_tables.up.sql. The file name is important (it must look something like 1_<name>.up.sql), as Encore uses the file name to run migrations automatically.

Add the following contents:

CREATE TABLE sites (
    id BIGSERIAL PRIMARY KEY,
    url TEXT NOT NULL
);
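As the schema evolves, each change goes in a new, sequentially numbered file. For instance, a migrations directory might grow to look like this (the second file name here is purely illustrative):

```text
site/migrations/
├── 1_create_tables.up.sql    -- runs first
└── 2_add_created_at.up.sql   -- a later, hypothetical migration
```

Encore applies the files in order of their numeric prefix, which is why the naming convention matters.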

🥐 Next, install the GORM library and PostgreSQL driver:

go get -u gorm.io/gorm gorm.io/driver/postgres

Now let’s create the site service itself. To do this we’ll use Encore’s support for dependency injection to inject the GORM database connection.

🥐 Create site/service.go and add this code:

package site

import (
    "encore.dev/storage/sqldb"
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

//encore:service
type Service struct {
    db *gorm.DB
}

var siteDB = sqldb.Named("site").Stdlib()

// initService initializes the site service.
// It is automatically called by Encore on service startup.
func initService() (*Service, error) {
    db, err := gorm.Open(postgres.New(postgres.Config{
        Conn: siteDB,
    }))
    if err != nil {
        return nil, err
    }
    return &Service{db: db}, nil
}

🥐 With that, we’re now ready to create our CRUD endpoints.
Create the following files:

site/get.go:

package site

import "context"

// Site describes a monitored site.
type Site struct {
    // ID is a unique ID for the site.
    ID int `json:"id"`
    // URL is the site's URL.
    URL string `json:"url"`
}

// Get gets a site by id.
//
//encore:api public method=GET path=/site/:siteID
func (s *Service) Get(ctx context.Context, siteID int) (*Site, error) {
    var site Site
    if err := s.db.Where("id = $1", siteID).First(&site).Error; err != nil {
        return nil, err
    }
    return &site, nil
}

site/add.go:

package site

import "context"

// AddParams are the parameters for adding a site to be monitored.
type AddParams struct {
    // URL is the URL of the site. If it doesn't contain a scheme
    // (like "http:" or "https:") it defaults to "https:".
    URL string `json:"url"`
}

// Add adds a new site to the list of monitored websites.
//
//encore:api public method=POST path=/site
func (s *Service) Add(ctx context.Context, p *AddParams) (*Site, error) {
    site := &Site{URL: p.URL}
    if err := s.db.Create(site).Error; err != nil {
        return nil, err
    }
    return site, nil
}

site/list.go:

package site

import "context"

type ListResponse struct {
    // Sites is the list of monitored sites.
    Sites []*Site `json:"sites"`
}

// List lists the monitored websites.
//
//encore:api public method=GET path=/site
func (s *Service) List(ctx context.Context) (*ListResponse, error) {
    var sites []*Site
    if err := s.db.Find(&sites).Error; err != nil {
        return nil, err
    }
    return &ListResponse{Sites: sites}, nil
}

site/delete.go:

package site

import "context"

// Delete deletes a site by id.
//
//encore:api public method=DELETE path=/site/:siteID
func (s *Service) Delete(ctx context.Context, siteID int) error {
    return s.db.Delete(&Site{ID: siteID}).Error
}

🥐 Restart encore run to cause the site database to be created, and then call the site.Add endpoint:

curl -X POST 'http://localhost:4000/site' -d '{"url": "https://encore.dev"}'
{
  "id": 1,
  "url": "https://encore.dev"
}

📝 Record uptime checks

In order to notify when a website goes down or comes back up, we need to track the previous state it was in.

To do so, let’s add a database to the monitor service as well.

🥐 Create the directory monitor/migrations and the file monitor/migrations/1_create_tables.up.sql:

CREATE TABLE checks (
    id BIGSERIAL PRIMARY KEY,
    site_id BIGINT NOT NULL,
    up BOOLEAN NOT NULL,
    checked_at TIMESTAMP WITH TIME ZONE NOT NULL
);

We’ll insert a database row every time we check if a site is up.

🥐 Add a new endpoint Check to the monitor service, that takes in a Site ID, pings the site, and inserts a database row in the checks table.

For this service we’ll use Encore’s sqldb package instead of GORM (in order to showcase both approaches).

Add this to monitor/check.go:


package monitor

import (
    "context"

    "encore.app/site"
    "encore.dev/storage/sqldb"
)

// Check checks a single site.
//
//encore:api public method=POST path=/check/:siteID
func Check(ctx context.Context, siteID int) error {
    site, err := site.Get(ctx, siteID)
    if err != nil {
        return err
    }
    result, err := Ping(ctx, site.URL)
    if err != nil {
        return err
    }
    _, err = sqldb.Exec(ctx, `
        INSERT INTO checks (site_id, up, checked_at)
        VALUES ($1, $2, NOW())
    `, site.ID, result.Up)
    return err
}

🥐 Restart encore run to cause the monitor database to be created, and then call the new monitor.Check endpoint:

curl -X POST 'http://localhost:4000/check/1'

🥐 Inspect the database to make sure everything worked:

encore db shell monitor

You should see this:


psql (14.4, server 14.2)
Type "help" for help.

monitor=> SELECT * FROM checks;
 id | site_id | up |          checked_at
----+---------+----+-------------------------------
  1 |       1 | t  | 2022-10-21 09:58:30.674265+00

If that’s what you see, everything’s working great!🎉

⏰ Add a cron job to check all sites

We now want to regularly check all the tracked sites so we can
immediately respond in case any of them go down.

We’ll create a new CheckAll API endpoint in the monitor service that will list all the tracked sites and check all of them.

🥐 Let’s extract some of the functionality we wrote for the
Check endpoint into a separate function.

In monitor/check.go it should look like so:

// Check checks a single site.
//
//encore:api public method=POST path=/check/:siteID
func Check(ctx context.Context, siteID int) error {
    site, err := site.Get(ctx, siteID)
    if err != nil {
        return err
    }
    return check(ctx, site)
}

func check(ctx context.Context, site *site.Site) error {
    result, err := Ping(ctx, site.URL)
    if err != nil {
        return err
    }
    _, err = sqldb.Exec(ctx, `
        INSERT INTO checks (site_id, up, checked_at)
        VALUES ($1, $2, NOW())
    `, site.ID, result.Up)
    return err
}

Now we’re ready to create our new CheckAll endpoint.

🥐 Create the new CheckAll endpoint inside monitor/check.go:

import "golang.org/x/sync/errgroup"

// CheckAll checks all sites.
//
//encore:api public method=POST path=/checkall
func CheckAll(ctx context.Context) error {
    // Get all the tracked sites.
    resp, err := site.List(ctx)
    if err != nil {
        return err
    }

    // Check up to 8 sites concurrently.
    g, ctx := errgroup.WithContext(ctx)
    g.SetLimit(8)
    for _, site := range resp.Sites {
        site := site // capture for closure
        g.Go(func() error {
            return check(ctx, site)
        })
    }
    return g.Wait()
}

This uses an errgroup to check up to 8 sites concurrently, aborting early if we encounter any error. (Note that a website being down is not treated as an error.)
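For intuition, the bounded-concurrency part of this pattern can be sketched with just the standard library, using a buffered channel as a semaphore (illustrative code, not part of the app — the function and URL names are made up):

```go
package main

import (
	"fmt"
	"sync"
)

// checkAll starts one goroutine per URL but allows at most `limit`
// checks in flight at once, using a buffered channel as a semaphore.
func checkAll(urls []string, limit int) []string {
	sem := make(chan struct{}, limit)
	results := make([]string, len(urls))
	var wg sync.WaitGroup
	for i, u := range urls {
		wg.Add(1)
		go func(i int, u string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done
			results[i] = "checked " + u
		}(i, u)
	}
	wg.Wait()
	return results
}

func main() {
	for _, r := range checkAll([]string{"a.com", "b.com", "c.com"}, 2) {
		fmt.Println(r)
	}
}
```

errgroup layers error propagation and context cancellation on top of this basic idea, which is why the tutorial uses it instead of hand-rolling the semaphore.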

🥐 Run go get golang.org/x/sync/errgroup to install that dependency.

🥐 Now that we have a CheckAll endpoint, define a cron job to automatically call it every 5 minutes.

Add this to monitor/check.go:

import "encore.dev/cron"

// Check all tracked sites every 5 minutes.
var _ = cron.NewJob("check-all", cron.JobConfig{
    Title:    "Check all sites",
    Endpoint: CheckAll,
    Every:    5 * cron.Minute,
})

Note: For ease of development, cron jobs are not triggered when running the application locally, but work when deploying the application to your cloud.

🚀 Deploy to Encore’s free development cloud

To try out your uptime monitor for real, let’s deploy it to Encore’s development cloud.

Encore comes with built-in CI/CD, and the deployment process is as simple as a git push encore.

(You can also integrate with GitHub to activate per Pull Request Preview Environments, learn more in the CI/CD docs.)

🥐 Deploy your app by running:

git add -A .
git commit -m 'Initial commit'
git push encore

Encore will now build and test your app, provision the needed infrastructure, and deploy your application to the cloud.

After triggering the deployment, you will see a URL where you can view its progress in Encore’s Cloud Dashboard. It will look something like: https://app.encore.dev/$APP_ID/deploys/...

From there you can also see metrics, traces, link your app to a GitHub repo to get automatic deploys on new commits, and connect your own AWS or GCP account to use for production deployment.

🥐 When the deploy has finished, you can try out your uptime monitor by going to:
https://staging-$APP_ID.encr.app/frontend.

You now have an Uptime Monitor running in the cloud, well done!✨

Publish Pub/Sub events when a site goes down

An uptime monitoring system isn’t very useful if it doesn’t
actually notify you when a site goes down.

To do so let’s add a Pub/Sub topic
on which we’ll publish a message every time a site transitions from being up to being down, or vice versa.

🔬 Type-Safe Infrastructure: Practical example

Normally, Pub/Sub mechanisms are blind to the data structures of the messages they handle. This is a common source of hard-to-catch errors that can be a nightmare to debug.

However, thanks to Encore’s Infrastructure SDK, you get fully type-safe infrastructure! You can now achieve end-to-end type-safety from the moment of publishing a message, right through to delivery. This not only eliminates those annoying hard-to-debug errors but also translates to major time savings for us developers.

— Now let’s actually implement it!👇

🥐 Define the topic using Encore’s Pub/Sub package in a new file, monitor/alerts.go:

package monitor

import "encore.dev/pubsub"

// TransitionEvent describes a transition of a monitored site
// from up->down or from down->up.
type TransitionEvent struct {
    // Site is the monitored site in question.
    Site *site.Site `json:"site"`
    // Up specifies whether the site is now up or down (the new value).
    Up bool `json:"up"`
}

// TransitionTopic is a pubsub topic with transition events for when a monitored site
// transitions from up->down or from down->up.
var TransitionTopic = pubsub.NewTopic[*TransitionEvent]("uptime-transition", pubsub.TopicConfig{
    DeliveryGuarantee: pubsub.AtLeastOnce,
})

Now let’s publish a message on the TransitionTopic if a site’s up/down state differs from the previous measurement.

🥐 Create a getPreviousMeasurement function in alerts.go to report the last up/down state:

import (
    "context"
    "errors"

    "encore.dev/storage/sqldb"
)

// getPreviousMeasurement reports whether the given site was
// up or down in the previous measurement.
func getPreviousMeasurement(ctx context.Context, siteID int) (up bool, err error) {
    err = sqldb.QueryRow(ctx, `
        SELECT up FROM checks
        WHERE site_id = $1
        ORDER BY checked_at DESC
        LIMIT 1
    `, siteID).Scan(&up)

    if errors.Is(err, sqldb.ErrNoRows) {
        // There was no previous ping; treat this as if the site was up before
        return true, nil
    } else if err != nil {
        return false, err
    }
    return up, nil
}

🥐 Now add a function in alerts.go to conditionally publish a message if the up/down state differs:

import "encore.app/site"

func publishOnTransition(ctx context.Context, site *site.Site, isUp bool) error {
    wasUp, err := getPreviousMeasurement(ctx, site.ID)
    if err != nil {
        return err
    }
    if isUp == wasUp {
        // Nothing to do
        return nil
    }
    _, err = TransitionTopic.Publish(ctx, &TransitionEvent{
        Site: site,
        Up:   isUp,
    })
    return err
}

🥐 Finally modify the check function in check.go to call the publishOnTransition function:

func check(ctx context.Context, site *site.Site) error {
    result, err := Ping(ctx, site.URL)
    if err != nil {
        return err
    }

    // Publish a Pub/Sub message if the site transitions
    // from up->down or from down->up.
    if err := publishOnTransition(ctx, site, result.Up); err != nil {
        return err
    }

    _, err = sqldb.Exec(ctx, `
        INSERT INTO checks (site_id, up, checked_at)
        VALUES ($1, $2, NOW())
    `, site.ID, result.Up)
    return err
}

Now the monitoring system will publish messages on the TransitionTopic whenever a monitored site transitions from up->down or from down->up.

However, it doesn’t know or care who actually listens to these messages. The truth is right now nobody does. So let’s fix that by adding a Pub/Sub subscriber that posts these events to Slack.

Send Slack notifications when a site goes down

🥐 Start by creating a Slack service slack/slack.go containing the following:

package slack

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

type NotifyParams struct {
    // Text is the Slack message text to send.
    Text string `json:"text"`
}

// Notify sends a Slack message to a pre-configured channel using a
// Slack Incoming Webhook (see https://api.slack.com/messaging/webhooks).
//
//encore:api private
func Notify(ctx context.Context, p *NotifyParams) error {
    reqBody, err := json.Marshal(p)
    if err != nil {
        return err
    }
    req, err := http.NewRequestWithContext(ctx, "POST", secrets.SlackWebhookURL, bytes.NewReader(reqBody))
    if err != nil {
        return err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    if resp.StatusCode >= 400 {
        body, _ := io.ReadAll(resp.Body)
        return fmt.Errorf("notify slack: %s: %s", resp.Status, body)
    }
    return nil
}

var secrets struct {
    // SlackWebhookURL defines the Slack webhook URL to send
    // uptime notifications to.
    SlackWebhookURL string
}

🥐 Now go to a Slack community of your choice (where you have permission to create a new Incoming Webhook). If you don’t have any, join the Encore Slack and ask in #help and we’re happy to help out.

🥐 Once you have the Webhook URL, save it as a secret using Encore’s built-in secrets manager:

encore secret set --local,dev,prod SlackWebhookURL

🥐 Test the slack.Notify endpoint by calling it via cURL:

curl 'http://localhost:4000/slack.Notify' -d '{"Text": "Testing Slack webhook"}'

You should see the Testing Slack webhook message appear in the Slack channel you designated for the webhook.

🥐 It’s now time to add a Pub/Sub subscriber to automatically notify Slack when a monitored site goes up or down. Add the following to slack/slack.go:

import (
    "encore.dev/pubsub"
    "encore.app/monitor"
)

var _ = pubsub.NewSubscription(monitor.TransitionTopic, "slack-notification", pubsub.SubscriptionConfig[*monitor.TransitionEvent]{
    Handler: func(ctx context.Context, event *monitor.TransitionEvent) error {
        // Compose our message.
        msg := fmt.Sprintf("*%s is down!*", event.Site.URL)
        if event.Up {
            msg = fmt.Sprintf("*%s is back up.*", event.Site.URL)
        }

        // Send the Slack notification.
        return Notify(ctx, &NotifyParams{Text: msg})
    },
})

🎉 Deploy your finished Uptime Monitor

You’re now ready to deploy your finished Uptime Monitor, complete with a Slack integration!

🥐 As before, deploying your app to the cloud is as simple as running:

git add -A .
git commit -m 'Add slack integration'
git push encore

You now have a fully featured, production-ready, Uptime Monitoring system running in the cloud. Well done! ✨

🤯 Wrapping up: All of this came in at just over 300 lines of code

You’ve now built a fully functioning uptime monitoring system, accomplishing a remarkable amount with very little code:

  • You’ve built three different services (site, monitor, and slack)
  • You’ve added two databases (to the site and monitor services) for tracking monitored sites and the monitoring results
  • You’ve added a cron job for automatically checking the sites every 5 minutes
  • You’ve set up a fully type-safe Pub/Sub implementation to decouple the monitoring system from the Slack notifications
  • You’ve added a Slack integration, using secrets to securely store the webhook URL, listening to a Pub/Sub subscription for up/down transition events

All of this in just a bit over 300 lines of code!🤯

🎉 Great job – you’re done!

Keep building with these Open Source App Templates.👈

If you have questions or want to share your work, join the developers hangout in Encore’s community Slack.👈

The post Building a fully Type-Safe Event-Driven Backend in Go appeared first on ProdSens.live.

My Journey as Project Admin in GSSoC’23 https://prodsens.live/2023/09/15/my-journey-as-project-admin-in-gssoc23/?utm_source=rss&utm_medium=rss&utm_campaign=my-journey-as-project-admin-in-gssoc23 https://prodsens.live/2023/09/15/my-journey-as-project-admin-in-gssoc23/#respond Fri, 15 Sep 2023 19:24:47 +0000 https://prodsens.live/2023/09/15/my-journey-as-project-admin-in-gssoc23/ my-journey-as-project-admin-in-gssoc’23


Greetings! I’m Shwet Khatri, currently enrolled in a master’s program for Computer Science and Engineering at The LNM Institute of Information Technology in Jaipur, India. My passion lies in crafting innovative real-world solutions through full-stack web development, utilizing the latest technologies. Additionally, I’m an avid supporter of open-source initiatives and enjoy engaging in global collaborations to contribute to open-source software projects.

I’ve just wrapped up my role as a Project Admin in the GSSoC (GirlScript Summer of Code) for the summer of 2023. Encouraged by this enriching experience, I’ve made the decision to document my entire journey in a blog. This blog will encompass everything from the initial selection phase to the final culmination of the program. My objective with this blog is to share the valuable insights I’ve acquired and offer guidance on how you too can become part of this exceptional program.

So, What is Open-Source?

Open Source is all about people working together to create software and projects. It focuses on things like being open, easy to use, and coming up with new ideas as a group. In Open Source projects, the instructions that make the software work are available to everyone. This means anyone can look at them, change them, and share them. This helps build a friendly community where many different people can share their knowledge and ideas, making everyone feel like they’re part of a team and empowering them to do great things.

Now, What is the GirlScript Summer of Code?

The GirlScript Summer of Code is a three-month-long program that happens every summer, organized by the Girlscript Foundation. During these three months, participants work on various open-source projects under the close guidance of experienced mentors. This opportunity allows students to start contributing to real-world projects right from their own homes, gaining valuable experience and exposure in the process.

GSSOC Logo

How does this program work?

The program kicks off in May, offering three different roles for participation: Contributors, Mentors, and Project Admins. Project Admins initiate the process by proposing their projects to attract open-source contributors. Following this, Mentors step in to guide these contributors in their work, collaborating closely with the Project Admins. Last but not least, we have contributors who actively engage in contributing to the projects of their choice during the dedicated contribution period.

In May, projects are selected for the program, and Mentors are matched with projects that align with their interests and skills. Contributors can then freely register for the program and begin their contributions, receiving guidance from both Mentors and Project Admins. One remarkable aspect of this program is its inclusivity, as there are no specific selection criteria for contributors. It’s open to everyone, welcoming beginners to join and learn.

Throughout the contribution period, program maintainers keep a Leaderboard, scoring contributors based on their merged pull requests across different projects. This encourages contributors to explore various projects, technologies, and ideas during the three-month program. At the program’s conclusion, top performers are rewarded with exciting prizes and swag items as a token of appreciation for their outstanding contributions.

How did I get selected?

Being a fan of open-source, I was already familiar with GSSoC. When I learned that it was taking place this summer, I promptly made up my mind to elevate my involvement by taking on the role of a Project Admin this year. At that point, I didn’t have a fully developed project, but I did possess a promising project idea and a clear vision of how I would collaborate with open-source contributors to bring it to life during GSSoC’23.

Following the specified requirements, I responded to a series of questions regarding my project, its vision, and my personal journey in the realm of open-source. Additionally, I diligently prepared and submitted a demo video. This video showcased the current state of my project, outlined the future objectives I aimed to accomplish, and highlighted how it could serve as a valuable learning experience for contributors interested in XR, offering them the chance to actively participate in an open-source project.

In just a matter of days, I received the exciting news that my project, AR-Webstore, had been chosen as one of the 100 projects participating in this year’s program. I was truly delighted and filled with anticipation as I embarked on this new chapter of my journey.

AR-Webstore Selected for GSSOC'23

About the project

Traditional e-commerce platforms fail to deliver immersive product experiences, leaving customers uncertain about the look, fit, and functionality of items. This lack of visualization leads to higher return rates as products may not meet expectations. Additionally, customer engagement suffers due to the limited ability to interact with and explore products online.

AR-Webstore revolutionizes the shopping experience by seamlessly integrating augmented reality. With AR-Webstore, customers can visualize products in their own spaces and view all the virtual features more clearly. This empowers customers to make informed decisions, reduces return rates, and enhances engagement, resulting in a more satisfying and immersive shopping journey.

AR-Webstore Preview

The future goals of AR-Webstore include building a complete end-to-end e-commerce platform that provides an immersive online shopping experience, and making the products interactive in a real environment using ML and AI rather than just demonstrating static 3D models.

My three months long journey …

My three-month journey has been quite the roller coaster ride. In my role as a project maintainer, I encountered various challenges along the way, but I was also able to swiftly identify and implement effective solutions for them. I highly recommend familiarizing yourself with these potential challenges in advance if you have plans to open-source or make one of your projects public.

Here are my suggestions:

  • Build your Team: Describe your project idea, the problem you’re solving, and the exposure contributors will get in-depth and share it on different social media platforms to get the best contributors aligned with your project.

  • Simple but solid tech stack: Keep the tech stack of the project easy to set up and something that most of the contributors already know. It will fascinate them to connect with your project better in the initial phase. For that, you should have good documentation for the new contributors who have no experience.

  • Divide and Rule: Divide your project’s final outcome into small tasks and assign it to the contributors while providing the deadline for each task. Raise issues at regular intervals for those tasks with descriptive objectives.

  • Maintainability and Sustainability: Set up good DevOps for your project as well which automates the process of deployment. Deployment previews on PRs really help in reviewing PRs quickly.

  • Align the team with the project: Provide a good reason about everything like why you think the contributor’s approach is not good or why yours is better. It will help them align with the project’s objectives.

  • Ask, Mentor, and Repeat: Encourage the contributors who are really interested in contributing to your project for a longer period and have active communication with them about asking for their updates, solving their doubts, and reviewing their work. To achieve a better understanding of each other, you may conduct online biweekly / monthly meetings with your contributors.

  • Learn in Public: Have a common public channel for all the contributors where they can post their queries or updates and keep maintainers to resolve their queries on a daily basis.

These points proved instrumental in enabling both myself and the contributors to maximize the benefits of this program. As the program concluded after three months, I found that our project, which had transitioned from being solely mine to a collective effort, had yielded far more than I initially anticipated.

During the contribution period, AR-Webstore got 60+ forks, 20+ stars, and 50+ merged PRs. I’d recommend the readers of this blog try out our collective effort Live Here and give it a star on GitHub.

AR-Webstore Stats

Why should you definitely go for this program?

One of the most significant benefits of participating in this program (or other paid/non-paid open-source initiatives) is the opportunity to expand one’s connections and engage with the community. This opens doors to valuable networking prospects and provides access to helpful feedback, fostering personal and professional growth.

I chose to participate in this program with the goal of sharing the knowledge I’ve accumulated over the years and gaining insights into managing a project with a diverse group of contributors. The program exceeded my expectations, offering me valuable experiences beyond what I had initially anticipated.

I highly recommend that anyone who has previously been involved in programs like GSSoC or similar initiatives as a contributor should consider stepping into roles like mentorship or project administration. Doing so will provide you with a deeper understanding of the responsibilities and challenges associated with being an open-source maintainer.

What’s Next?

While the GSSoC journey has concluded this year, the AR-Webstore’s journey is just beginning. My aspiration is to elevate it to new heights by creating a groundbreaking e-commerce product. I aim to extend this project to other open-source programs to attract highly skilled developers. This endeavor will not only foster a community of individuals keen on mastering XR through open-source solutions but also allow me to contribute my utmost to the growth and prosperity of this community.

Conclusion

After three months, time has flown by swiftly. I didn’t want this program to conclude, but as the saying goes, all good things must eventually come to an end.

I’ve been genuinely amazed by the warm and supportive community I encountered during GSSoC. The program has been consistently supporting new contributors, and this year has been a delightful experience for me. I even had the opportunity to mentor these contributors and share my own experiences. I’m eagerly looking forward to remaining an active and engaged part of this community indefinitely.

Adios

Thank you for reading this far! I hope you found this blog informative and gained valuable insights into the details of the GirlScript Summer of Code Program. Sharing knowledge and experiences is a great way to foster collaboration and learning in the tech community. Best of luck with your future endeavors!

If you have any further questions about the program, open-source development, or technology in general, or if you’d like to see my work, please don’t hesitate to reach out to me on my social media accounts listed below:

Linkedin : https://www.linkedin.com/in/shwet-khatri

GitHub : https://github.com/ShwetKhatri2001

Twitter : https://twitter.com/shwetkhatri2001

Portfolio : https://shwetkhatri.netlify.app/

Till then, Keep Building 🚀, Keep Contributing 🙂

The post My Journey as Project Admin in GSSoC’23 appeared first on ProdSens.live.

Top React Libraries Every Developer Must Know https://prodsens.live/2023/09/10/top-react-libraries-every-developer-must-know/?utm_source=rss&utm_medium=rss&utm_campaign=top-react-libraries-every-developer-must-know https://prodsens.live/2023/09/10/top-react-libraries-every-developer-must-know/#respond Sun, 10 Sep 2023 05:25:39 +0000 https://prodsens.live/2023/09/10/top-react-libraries-every-developer-must-know/ top-react-libraries-every-developer-must-know


The post Top React Libraries Every Developer Must Know appeared first on ProdSens.live.


React is taking over the world of web development – and for good reason. Created by Facebook and released in 2013, React is an open-source JavaScript library for building user interfaces that has revolutionized front-end development.

In this comprehensive guide for beginners, we’ll cover all the core concepts you need to know to get up and running with React. You’ll learn fundamental topics like:

  • How React works and why it’s useful
  • JSX syntax
  • Components, props, and state
  • Handling events
  • The virtual DOM
  • Hooks
  • Working with forms
  • Fetching data
  • Routing

By the end, you’ll have a solid grasp of React fundamentals and be ready to build your applications. Let’s get started!

How React Works

At its core, React is all about components. A React application comprises multiple reusable components, each responsible for rendering a small part of the overall UI.

For example, you could have a Navbar component, Sidebar component, Form component, etc.

Each component manages its internal state and renders UI based on that state. When the state of a component changes, React efficiently updates and re-renders only the components that need to be re-rendered. This is possible thanks to React’s use of a virtual DOM.

The virtual DOM is a JavaScript representation of the actual DOM. When a component’s state changes, React compares the resulting virtual DOM against the previous virtual DOM.

It then figures out the minimal set of actual DOM manipulations needed to sync them up.

This means you don’t have to worry about changing the DOM yourself – React handles it automatically behind the scenes.

This ultimately allows for much faster UI updates than traditional JavaScript apps manually manipulating the DOM.
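As a toy illustration of the idea — this is plain JavaScript and far simpler than React's actual reconciliation algorithm — diffing two object trees can collect only the nodes that changed:

```javascript
// Compare two plain-object trees and collect the minimal set of
// text updates needed to bring the old tree in sync with the new one.
function diff(oldNode, newNode, path = "root", patches = []) {
  if (oldNode.text !== newNode.text) {
    patches.push({ path, text: newNode.text });
  }
  const newChildren = newNode.children || [];
  const oldChildren = oldNode.children || [];
  newChildren.forEach((child, i) => {
    if (oldChildren[i]) diff(oldChildren[i], child, `${path}/${i}`, patches);
  });
  return patches;
}

const before = { text: "App", children: [{ text: "Hello" }, { text: "Count: 0" }] };
const after  = { text: "App", children: [{ text: "Hello" }, { text: "Count: 1" }] };

console.log(diff(before, after));
// one patch: { path: 'root/1', text: 'Count: 1' }
```

Only the changed node produces a patch; the unchanged sibling is skipped entirely, which is the intuition behind React re-rendering just the components that need it.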

React also makes use of a one-way data flow. State is passed down from parent components to child components through props.

When state needs to be updated, it’s done through callback functions, rather than directly modifying the state.

This unidirectional data flow makes tracking state management in your app easier. It also helps isolate components, since each one works with a local copy of state, rather than relying on global state.
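The one-way data flow can be sketched in plain JavaScript (the names here are illustrative, not React API): the parent owns the state, and the child only ever receives a value and a callback through its props.

```javascript
// One-way data flow sketch: the parent owns the state; the child
// renders from props only and never mutates state directly.
function Child(props) {
  return `Count is ${props.count}`;
}

function Parent() {
  let count = 0; // state lives in the parent
  const setCount = (next) => { count = next; };
  return {
    // State flows down as props; updates flow up via the callback.
    render: () => Child({ count, onIncrement: () => setCount(count + 1) }),
    increment: () => setCount(count + 1),
  };
}

const app = Parent();
console.log(app.render()); // "Count is 0"
app.increment();           // state changes only through the callback
console.log(app.render()); // "Count is 1"
```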

In summary, here are some of the key advantages of React:

  • Declarative – React uses declarative code to render UI based on state rather than imperative code that updates DOM manually.
  • Component-based – Build encapsulated components that manage their own state.
  • Learn once, write anywhere – React can be used for web, mobile, VR, and even native desktop apps.
  • High performance – The virtual DOM makes React extremely fast and efficient.

JSX Syntax

React uses a syntax extension of JavaScript called JSX to describe what the UI should look like. JSX looks like a combination of HTML and JavaScript:

const element = &lt;h1&gt;Hello, world!&lt;/h1&gt;;

This syntax is processed into standard JavaScript function calls and objects. Babel is typically used in React apps to convert JSX code into regular JavaScript that browsers can understand.
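Roughly speaking, Babel turns each JSX element into a call to React.createElement. The mini createElement below is a stand-in to show the shape of that output, not React’s actual implementation:

```javascript
// Stand-in for React.createElement, showing roughly what JSX compiles to.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// <h1 className="title">Hello, world!</h1> compiles to roughly:
const element = createElement('h1', { className: 'title' }, 'Hello, world!');

console.log(element.type);            // 'h1'
console.log(element.props.className); // 'title'
```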

One key difference is that JSX uses className instead of class for adding CSS classes, since class is a reserved word in JavaScript.

You can embed any valid JavaScript expression inside JSX code by wrapping it with curly braces:

const name = 'John';
const element = &lt;h1&gt;Hello, {name}&lt;/h1&gt;;

JSX elements can have attributes just like HTML elements can. However, you can’t use keywords like class and for since they are reserved in JavaScript. Instead, React DOM components expect attributes like htmlFor and className:

const element = &lt;div className="container"&gt;
  &lt;label htmlFor="name"&gt;Enter name:&lt;/label&gt;
  &lt;input id="name" /&gt;
&lt;/div&gt;;

You can also nest child elements inside a parent JSX element:

const element = (
  &lt;div&gt;
    &lt;h1&gt;I am the title&lt;/h1&gt;
    &lt;p&gt;This is a paragraph&lt;/p&gt;
  &lt;/div&gt;
);

JSX allows us to write markup that looks like HTML but also lets us use the full power of JavaScript inside that markup. This is what makes React so useful for UI development.

Components, Props, and State

Components are the building blocks of any React app. A component is a self-contained UI piece that encapsulates markup and logic.

Here’s an example of a simple Button component:

function Button(props) {
  return &lt;button&gt;{props.text}&lt;/button&gt;;
}

This function accepts props as an argument, accesses the text property on props, and returns JSX that displays a button element.

Components can be either functions or classes. Functional components are simpler since they only need to receive props and return JSX.

Once you have a component, you can render it by passing JSX to ReactDOM.render():

const root = document.getElementById('root');
ReactDOM.render(
  &lt;Button text="Click me" /&gt;,
  root
);

Components can be nested inside other components to build complex UIs:

function App() {
  return (
    &lt;div&gt;
      &lt;Button text="Save" /&gt;
      &lt;Button text="Cancel" /&gt;
    &lt;/div&gt;
  );
}

Props are how data gets passed into components. They are immutable and should not be changed inside the component.

State holds data that can change over time, triggering UI updates. State should be initialized when a component is created:

class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = {count: 0};
  }
}

State should only be modified using setState():

this.setState({count: this.state.count + 1});

Calling setState() triggers React to re-render the component with the new state value. This is how React keeps the UI in sync with data.
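That state-then-re-render cycle can be mimicked with a tiny plain-JavaScript class. This is a conceptual sketch only; React’s real setState also batches updates and schedules rendering asynchronously:

```javascript
// Minimal sketch of the setState idea: merge partial state, then
// re-render whenever state changes. Not how React works internally.
class TinyComponent {
  constructor() {
    this.state = { count: 0 };
    this.output = this.render();
  }
  setState(partial) {
    this.state = { ...this.state, ...partial }; // merge, don't replace
    this.output = this.render();                // re-render with new state
  }
  render() {
    return `Count: ${this.state.count}`;
  }
}

const c = new TinyComponent();
c.setState({ count: c.state.count + 1 });
console.log(c.output); // "Count: 1"
```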

Handling Events

Handling events with React elements is similar to handling events with DOM elements. There are a few key syntax differences:

  • React events are named using camelCase instead of lowercase (onclick becomes onClick)
  • You pass a function as the event handler rather than a string

For example, to handle a click event:

function Button(props) {
  function handleClick() {
    console.log('Clicked!');
  }
  return &lt;button onClick={handleClick}&gt;Click Me&lt;/button&gt;;
}

Note how handleClick is a normal JS function containing any code you want to run when the element is clicked.

You can also bind event handlers in the constructor:

class Button extends React.Component {
  constructor(props) {
    super(props);
    this.handleClick = this.handleClick.bind(this);
  }
  handleClick() {
    // event handler logic
  }
  render() {
    return &lt;button onClick={this.handleClick}&gt;Click Me&lt;/button&gt;;
  }
}

The bind call creates a new function permanently bound to the component instance. This lets you correctly access props and state via this inside the handler.
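The need for bind is plain JavaScript behavior, not something React adds. A method detached from its instance loses its this unless it was bound, which is exactly what happens when you pass a method as an event handler:

```javascript
class ClickCounter {
  constructor() {
    this.count = 0;
    // Bind once so the method keeps `this` even when detached,
    // e.g. when passed as an event handler.
    this.increment = this.increment.bind(this);
  }
  increment() {
    this.count += 1;
    return this.count;
  }
}

const counter = new ClickCounter();
const handler = counter.increment; // detached, like onClick={this.increment}
handler();                         // still works because of the bind
console.log(counter.count);        // 1
```

Without the bind call in the constructor, calling the detached handler would throw, because class bodies run in strict mode and this would be undefined inside increment.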

You can pass custom arguments to event handlers, too: