Richard Chen, Author at ProdSens.live

JS Toolbox 2024: Essential Picks for Modern Developers Series Overview

Introducing JS Toolbox 2024

Staying ahead of the curve in JavaScript development requires keeping on top of the ever-evolving landscape of tools and technologies. As we head into 2024, the sprawling world of JavaScript development tools will continue to transform, offering more refined, efficient, and user-friendly options.

This series, JS Toolbox 2024, is your one-stop for a comprehensive overview of the latest and most impactful tools in the JavaScript ecosystem.

Across the series, we’ll look at various tools, including runtime environments, package managers, frameworks, static site generators, bundlers, and test frameworks. This series will help you effectively choose the best available tools, analysing their functionalities, strengths, weaknesses, and how they fit into the modern JavaScript development process.

Whether you’re a seasoned developer or just starting out, this series will give you the necessary knowledge to select the right tools for your projects in 2024. Here’s what we’ll be getting into in this series.

Series overview:

  1. Runtime Environments and Package Management

In the first instalment, we explore runtime environments, focusing on Node, Deno and Bun. We’ll gain insights into their histories, performance metrics, community support, and ease of use.

The segment on package management tools compares npm, yarn, and pnpm, highlighting their performance and security features. We provide tips for choosing the most suitable package manager for your project.

  2. Frameworks and Static Site Generators

The next post in the series provides a thorough comparison of popular frameworks like React, Vue, Angular, and Svelte, focusing on their unique features and suitability for different project types.

We also examine static site generators, covering Astro, Nuxt, Hugo, Gatsby, and Jekyll, taking a detailed look at their usability, performance, and community support.

  3. Bundlers and Test Frameworks

In part 3, we jump into the world of bundlers, comparing webpack, esbuild, Vite, and Parcel 2. This section aims to guide developers through each bundler, focusing on their performance, compatibility, and ease of use.

The test frameworks section provides an in-depth look at MochaJS, Jest, Jasmine, Puppeteer, Selenium, and Playwright. We review each framework’s ease of use, community support, and overall robustness, supplemented with example unit tests.

What you’ll take away

JS Toolbox 2024 provides an informed view of the current JavaScript development tools landscape. Through personal insights and industry observations, I’ve aimed to guide developers through a selection of the myriad tools available, helping you make educated decisions that meet your project needs and personal preferences. Be sure to check out the entire series for insights that could shape your development workflow and tool choices in 2024 and beyond.

Like you, I’m always curious and looking to learn. If I’ve overlooked a noteworthy tool or if you have any feedback to share, reach out on Twitter or LinkedIn.

Workload Analysis: Steps, Examples & Tools

Your team is your most valuable resource when executing a project. The ability to know who you need and when you need them for the project is what workload analysis is all about. As you can imagine, workload planning is essential for project success.

We’ll explain what workload analysis is and when you should be using it in your project. Then we’ll outline the steps you need to take for workload distribution, list some tools you can use for workload capacity planning and explain why it matters. Finally, we’ll throw in some free templates to help with workload analysis.

What Is Workload Analysis?

Workload analysis is how project managers figure out how many team members they will need to properly execute a project. It not only deals with workload planning but also workload balance to ensure that no one person is overallocated, which threatens burnout and can erode morale.

Being able to see the workload of your team during the execution phase of the project is part of the monitoring and controlling phase. Workload tracking helps project managers reallocate team members as needed to serve the project and not overburden any one member of the project team.

Workload analysis is part of the larger discipline of workload management and is an ongoing process throughout the execution of the project. A project manager will always be tracking and analyzing workload to catch red flags, optimize the team and make sure no one is kept waiting on the sidelines.

Project management software helps with workload analysis and capacity planning. ProjectManager is award-winning project management software that has resource management tools to keep teams productive without risking burnout and poor morale. Project managers can view our color-coded workload chart, which makes it easy to see who is overallocated. Then you can reallocate your resources right from the chart to balance the team’s workload and keep them productive and happy. Get started with ProjectManager today for free.

ProjectManager's workload chart
ProjectManager’s resource management tools balance your team’s workload. Learn more

When Should You Use Workload Analysis?

Workload analysis comes into play most in two scenarios: optimizing current business processes and planning new projects. In terms of the former, resource allocation is crucial to keep business processes and operations running smoothly. Organizations need to have the right resources at the right time to keep the business doing what it does.

Businesses risk delays, and worse, if they ignore workload analysis and assume they can simply assign their team’s work once and never adjust it to keep everyone working at capacity. The same is true when planning new projects.

When a project is approved, there’s likely already a plan and budget in place. But that project schedule needs to align with your resources and you have to know who’s available and when they’re available to work. Then, once assigned, you need to track their work and make sure they’re not overallocated. Without workload analysis, projects can quickly go over budget and miss important deadlines.

Workload Analysis Steps

Workload analysis starts by being able to monitor your team. But it’s more than just knowing what they’re doing, though that helps. You need to follow these steps to gain valuable insights into managing the team’s workload.

1. Identify Your Project Goals

The first step is to define the project goals. All projects have a goal and knowing that will inform the rest of the project, from start to finish. Without a clear idea of your goal for the project, you’re never going to accomplish anything and just waste time, money and resources.

2. Define the Scope of Your Project

To understand where your team needs to be allocated you first have to understand the project scope. This will help you see what’s ahead in the project and, in so doing, you’ll be able to create tasks that’ll inform the workload for your team.

3. Estimate the Resources That You Need

Resources aren’t only people, but raw materials, equipment and anything else that the team needs to accomplish the task assigned to them. Therefore, you need to forecast what resources you’ll need, which includes your team, their skill sets and nonhuman resources. The best way to do this is task by task. You can use a work breakdown structure to identify all the project deliverables and the tasks needed to deliver them.

4. Estimate Costs and Create a Budget

Now that you’ve estimated the resources you’ll need, it’s time to figure out how much each of these will cost. This will lead to the creation of the project budget that is part of any project. You’ll request those funds to deliver the project, therefore, it should be as accurate as you can make it. These cost estimations are particularly important because they will determine if it’s possible to hire more employees.

5. Create a Timeline

You have all the elements to create a schedule. Map your tasks on a visual timeline that starts at the beginning of the project and ends with its completion. You’ll want to link task dependencies to avoid delays, add milestones to help you track progress and give each task a start and end date.

6. Compare Your Current Resource Capacity With Your Resource Requirements

At this point, you can look at the resources you have and compare them to what your project requires. You might have enough resources, too many resources or not enough resources. This will determine if you have to allocate more resources to the project.

7. Assemble a Team With the Necessary Skills

Onboard the team you need to execute the project. They should have the skills that are required to execute the project properly. You’re also looking for a team that can work together, whether in the office or remotely. If they’ve never worked together, you can set up some team-building exercises to help them bond.

8. Balance Workload Distribution by Evenly Assigning Tasks

With the schedule in place, you can start assigning tasks to the team. Be sure to evenly distribute the tasks across your team. As noted above, if one team member is carrying too heavy a load, it’s going to put a drag on the project and can erode morale.

9. Monitor Resource Availability & Utilization Throughout the Project

To avoid overallocation and to ensure that your team is keeping to the schedule, you’ll want to monitor your resource availability and ensure that you have what you need when you need it. Throughout the project, you can reallocate resources as needed to keep the project on track and the workload balanced.

Workload Analysis Example

To better illustrate what workload analysis is, let’s imagine a project manager in a manufacturing company who has been given a project to create 100 widgets. The manufacturing facility can produce a thousand widgets a day with a crew of 10, but the factory is already being used to create 500 different widgets.

The project manager will have to look over the resources available. In this case, the other project is using only a five-member team to deliver the 500 widgets. That leaves five workers unallocated. The project manager can take one of those five unassigned workers and get them on his project, which should be completed within a day.

During the run of those 100 widgets, however, the project manager will want to monitor the progress of the person working on his project. Maybe the project manager could put all five employees to work on the project and get it done five times faster. Maybe some of those unassigned workers will be assigned to other duties.

The project manager must keep an eye on the availability of the manufacturing crew. If there’s time to wait a day or longer, the project manager won’t have to allocate more than one or two employees to the project. However, if the availability of resources is limited, the project manager will have to figure out how best to allocate the resources and get the project done. That’s workload analysis.

Workload Analysis Tools

Our workload analysis example was simple, but it helps to wrap your head around the process. Of course, these calculations don’t need to be done by hand. There are workload analysis tools that can help. Here are a few.

  • Workload charts: Displays tasks on a calendar grid showing each team member’s task allocation or workload.
  • Timesheets: A physical or digital tool for recording and tracking the hours each team member spends working.
  • Workload tracking dashboards: Monitors workload by converting data into graphs and charts.

Benefits of Workload Analysis

Being able to monitor every aspect of a project is a critical part of project management. You can’t control a project and keep it on track if you’re unaware of what’s happening on a day-to-day basis. This is certainly true with workload analysis and these are some of the reasons why.

Helps Avoid Resource Under- and Overallocation

Being able to keep your team’s workload balanced is essential for many reasons. If they’re overallocated, you risk burnout and an erosion of team morale, which is detrimental to your project’s success. On the other hand, if your team is underallocated, they’re not working at capacity or being as productive as they could be. Workload analysis keeps an eye on allocation and allows project managers to rebalance the workload.

Reveals Skill Gaps and Hiring Needs

Another benefit of workload analysis is that it provides a window into your team’s skill sets. You can tell if they need more training, which serves both them and the project. At the same time, by monitoring the workload, a project manager can see if more resources are needed to complete the work on schedule. They can bring this data to the executive team to show the need for new hires.

Helps Complete Projects on Time

Workload analysis is one of the tools in a project manager’s toolbox to deliver projects on time and within budget. Resources cost money. If your team is behind schedule, it’s going to cut into your budget. The project might reach completion, but miss its deadline and cost more than it was funded for. However, workload analysis helps to keep at least the labor part of the equation sound so teams are working at capacity and staying productive.

Provides a Healthy Work Environment

We’ve mentioned morale and the danger of eroding morale a few times, but it’s worth going into a bit more because it’s such an important part of workload analysis. If your team is unhappy, they’re not going to deliver their best. Overburdening them with tasks is a sure way to make them unhappy, which leads to an unhealthy work environment for them and others. It becomes like a poison introduced to the project and it infects every aspect of the work. Workload analysis can help to ensure the teams are working at capacity and sharing the load so that there’s no unnecessary jealousy.

Workload Analysis Templates

One way to do workload analysis is with templates. ProjectManager has dozens of free project management templates for Excel and Word that you can download to manage every aspect of your project, from start to finish. Here are a few to help with workload analysis.

Resource Plan Template

Before you can analyze workload you have to plan it. Our free resource plan template for Excel allows you to list your resources, the cost of those resources and then assign them on a calendar for weeks in advance.

Capacity Planning Template

Use our free capacity planning template for Excel to figure out how much production capacity you need to meet demand. It’s a useful tool for manufacturing, but any project can apply this free template for resource management.

Timesheet Template

Timesheets are one of the tools that assist in workload analysis. Our free timesheet template for Excel lists the days, dates, start time, lunch start and end times and more to track the amount of time spent working. There are also columns for regular hours, overtime and total pay.

How ProjectManager Helps With Workload Analysis

While templates can help with workload analysis, they’re always going to be out of date. You have to manually update templates, but project management software is a more efficient way to manage workload. Our workload chart, as mentioned above, allows you to identify overallocation and underallocation in real time. But that’s only one piece of our larger set of resource management features.

Track Resource Allocation, Availability and Utilization

Our software has multiple project views that allow your team to work how they want. They also help with resource allocation. When a project manager is assigning resources on our robust Gantt charts they can see the availability of the team, including vacation time, PTO and even global holidays. This, with the workload chart and timesheets, can help them get the most out of their teams by tracking labor costs, allocation and more.

ProjectManager's timesheet
Monitor the Cost and Progress of Your Projects

Project managers need a tool to allow them to instantly see the progress of the project and the performance of their team. All they have to do is toggle over to our real-time dashboard and get a high-level overview of the project. Our dashboard is constantly updated with live data, which it then displays in easy-to-read graphs and charts that show project metrics, such as cost, time, workload and more. Unlike lightweight tools, our dashboard doesn’t require a complicated and time-consuming setup. It’s ready when you are.

ProjectManager's dashboard

Our software gives you all the resource management tools you need to manage your team’s workload and more. One of our multiple project views is a calendar that allows you to see your project plan month by month. You can add tasks, move tasks and use resource management features to help keep the project on track.

ProjectManager is award-winning project management software that connects teams in the office, out in the field or anywhere in between. They can share files, comment at the task level and stay updated with email notifications and in-app alerts. Join teams at companies as varied as Avis, Nestle and Siemens who are using our software to deliver successful projects. Get started with ProjectManager today for free.

The art of conditional rendering: Tips and tricks for React and Next.js developers

Conditional rendering is a technique used in web applications to selectively render a component out of a set of candidates based on some condition, such as user authentication status, user privilege, or application state. It can also be used to implement a wide range of core React UI concepts, such as client-side routing and lazy loading.

In this article, you’ll learn about the benefits of conditional rendering, how it differs from conditional routing, and how to implement both in React, Next.js, and Remix.

Benefits of conditional rendering in React

In apps made using React or React-like frameworks, utilizing conditional rendering comes with a host of benefits, including faster load times.

The load time of a web application is influenced by the Document Object Model (DOM) size of the page being loaded. Keeping too many elements, especially those that are not displayed unless the user scrolls down, may lead to an unnecessarily large DOM size and reduce your application’s performance.

In this situation, lazy loading, a popular technique used to defer the loading of resources until needed, can be implemented to defer rendering a component unless the user scrolls down to bring it into the viewport. One of the most popular lazy loading libraries in React, react-lazyload, makes use of conditional rendering to render components only when they are scrolled into the viewport of the user’s browser or, in other words, are visible to the user.
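As an illustration, here is roughly how react-lazyload is typically used (a minimal sketch: HeavyWidget stands in for any expensive component, and the height and offset values are illustrative):

import LazyLoad from "react-lazyload";
import HeavyWidget from "./HeavyWidget"; // placeholder for an expensive component

function Page() {
  return (
    <div>
      <p>Content that is visible right away</p>
      {/* HeavyWidget is only mounted once it scrolls near the viewport */}
      <LazyLoad height={200} offset={100} once>
        <HeavyWidget />
      </LazyLoad>
    </div>
  );
}

export default Page;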

In addition to performance, conditional rendering can also help improve user experience. It enables you to customize your UI based on the user’s properties, such as authentication state (ie a login button for unauthenticated users and a logout button for authenticated users in the same place on the screen) and access privilege (ie editing controls/detailed analytics for owners of content, such as tweets, while showing limited information for non-owners).

Moreover, conditional rendering can help you gracefully handle client-side operations, such as data fetching and communication with the backend. You can conditionally render a loading bar while an operation is carried out or hide an empty list while data is loaded from a remote source.
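For instance, here’s a minimal sketch of conditionally rendering a loading state while data is fetched (fetchUsers is a hypothetical data-fetching helper passed in as a prop):

import { useEffect, useState } from "react";

function UserList({ fetchUsers }) {
  // fetchUsers is assumed to return a promise resolving to an array of users
  const [users, setUsers] = useState(null);

  useEffect(() => {
    fetchUsers().then(setUsers);
  }, [fetchUsers]);

  // While the request is in flight, render a loading message instead of an empty list
  if (users === null) return <p>Loading...</p>;

  return (
    <ul>
      {users.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  );
}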

Conditional rendering also enables client-side routing in single-page React apps, as implemented by packages such as react-router.

In summary, it’s safe to say that conditional rendering is one of the most important techniques used in React.

Conditional rendering vs. conditional routing

When discussing conditional rendering, the topic of conditional routing often comes up. Conditional routing is similar to conditional rendering in that it controls what the user gets to see based on conditions such as the user or application state. However, instead of rendering or destroying components, it navigates the user from one route (or page) to another. This is useful from an access control perspective.

Web apps can be accessed directly via URLs. This means that the URL of a page inside of your app, regardless of whether the page is public, can be captured and retained by users. In the case of privileged pages (nonpublic pages that are only accessible to users with certain roles, such as personal profile/preferences pages), the URLs can be misused to try to gain access to privileged content.

With the help of conditional routing, a React app can determine if an incoming request for a page render has the right privilege to access the route. If the privilege is not found, the render is denied, and the request is redirected to another page. The page the user is redirected to often asks to authenticate or informs the user that access has been denied.
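On the client side, a router such as React Router (v6) lets you express this kind of guard as a small wrapper component. Here’s a rough sketch, assuming the isAuthenticated flag comes from your own authentication state:

import { Navigate } from "react-router-dom";

// Guards a privileged page; isAuthenticated is assumed to come from your own auth state
function RequireAuth({ isAuthenticated, children }) {
  if (!isAuthenticated) {
    // Deny the render and send the user to the login route instead
    return <Navigate to="/login" replace />;
  }
  return children;
}

// Usage inside a route configuration (illustrative):
// <Route path="/profile" element={<RequireAuth isAuthenticated={loggedIn}><Profile /></RequireAuth>} />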

For example, to experience conditional routing in action, try navigating to https://mail.google.com/mail in a new incognito window. When you do, you’ll be redirected to the Google sign-in page. That’s conditional routing working behind the scenes.

However, conditional routing in an SPA is not a guarantee for security. Even before your (single page) React app redirects unauthenticated users away, it still contains the contents of the privileged page in its JavaScript source code. This could lead to incidents of unauthorized access to privileged data. Therefore, it’s best to set up authentication-related conditional routing server side using frameworks such as Next.js and Remix.

Implementing conditional rendering in React

Now that you know more about conditional rendering, in the following section, you’ll learn how to implement it in a React app using one of the fundamental concepts of programming — the if…else statement.

To get started, create a new React app by running the following commands:

npx create-react-app conditional-rendering-react
cd conditional-rendering-react
npm install

Once the app is ready, paste the following code into the src/App.js file:

import { useState } from "react"

// Define two components
const LoginButton = (props) => {
  return <>
    <p>You are logged out</p>
    <button onClick={props.onClick}>Log In</button>
  </>
}

const LogoutButton = (props) => {
  return <>
    <p>You are now logged in</p>
    <button onClick={props.onClick}>Log Out</button>
  </>
}

function App() {
  // Define a state that the condition will be based upon
  const [loggedIn, setLoggedIn] = useState(false);

  // Define a function that can change the condition value
  const toggleLoggedIn = () => {
    setLoggedIn((currState) => {
        return !currState;
    })
  }

  // Use an if...else statement to render one out of two components based on the login state
  if (loggedIn)
    return <LogoutButton onClick={toggleLoggedIn} />
  else
    return <LoginButton onClick={toggleLoggedIn} />
}

export default App;

This code has inline comments to explain how everything has been implemented.

Next, try running the app using the following command:

npm start

You’ll see some text and a button on the screen when you navigate to the app URL (ie http://localhost:3000):


Try clicking on the Log In button to change the state and trigger a re-render. You’ll notice that both the text and the button are replaced with those applicable to a logged-in user:


And that’s it! That’s how easy it is to implement conditional rendering in React. Next, you’ll see how to do it in a Next.js app.

Implementing conditional rendering and conditional routing in Next.js

Next.js is an opinionated, production-focused, open source, React-based framework that helps you to build highly performant web apps. It was released by Vercel in 2016 and has since grown to become one of the top React-based frameworks in the web development industry.

Out of the box, Next.js supports server-side rendering and static rendering, and it provides easy-to-use solutions for common development problems, such as data fetching, routing, and server runtimes.

How to implement conditional rendering in Next.js

Because the syntax for JSX is consistent across Next.js and React, you can apply conditional rendering techniques in Next.js just as you would in React.

To get started in Next.js, create a new Next.js app by running the following command:

npx create-next-app conditionals-next

Follow the on-screen prompts to create your app. You can choose the default options (except for TypeScript, where you’ll need to select No). Once the project is created, paste the following code in the pages/index.js file:

import { useState } from "react";

// Define two components
const LoginButton = (props) => {
  return <>
    <p>You are logged out</p>
    <button onClick={props.onClick}>Log In</button>
  </>
}

const LogoutButton = (props) => {
  return <>
    <p>You are now logged in</p>
    <button onClick={props.onClick}>Log Out</button>
  </>
}

export default function Home() {
  // Define a state that the condition will be based upon
  const [loggedIn, setLoggedIn] = useState(false);

  // Define a function that can change the condition value
  const toggleLoggedIn = () => {
    setLoggedIn((currState) => {
        return !currState;
    })
  }

  // Use the ternary conditional operator to render one out of two components based on login state
  return loggedIn
    ? <LogoutButton onClick={toggleLoggedIn} />
    : <LoginButton onClick={toggleLoggedIn} />
}

This code contains inline comments to explain what’s happening at each step. A notable change here compared to the React example is that this example makes use of the ternary conditional operator in JavaScript to choose one out of two components when rendering. You can run the app using the following command:

npm run dev

Once the app starts, you’ll see something like this at http://localhost:3000:


Once again, you can try clicking on the Log In button to change the state:

How to implement conditional routing in Next.js

Implementing conditional routing in Next.js 10+ is also easy. To get started, create a new profile.js file in your conditionals-next/pages directory and paste the following code snippet in it:

// This is the profile page
const Profile = () => {
  return <p>This is your profile</p>
}

// This function is called on the server side before the page is rendered
// Redirecting from this method means that the user will not see the page even for a flash of a second (which often happens in client-side redirection)
export async function getServerSideProps(context) {
    const allowAccess = false; // This is usually deduced from the authentication state of the user

    if (!allowAccess) {

        console.log("Redirecting to home...")

        return {
            redirect: {
                permanent: false,
                destination: "/"
            }
        }
    } else
        return { props: {} }
}

export default Profile

Then try running the app again and navigating to http://localhost:3000/profile. You will be instantly redirected to the home page (ie the / route). You can check the terminal to find the redirection log:

# ... other output here
wait  - compiling...
event - compiled client and server successfully in 109 ms (199 modules)
wait  - compiling /profile (client and server)...
event - compiled client and server successfully in 114 ms (202 modules)
Redirecting to home...

If you change the value of allowAccess to true, you can view the profile page:

Implementing conditional rendering and conditional routing in Remix

Remix is another growing React-based framework for building web apps that focuses on web standards and helps you build better user experiences. Remix became open source in 2021 and has become a strong competitor to Next.js.

Remix is a fully server-side rendered framework and offers features such as built-in error boundaries and transition handling.

How to implement conditional rendering in Remix

Since Remix is based on React, you can implement conditional rendering in Remix similar to how you did in React and Next.js.

To get started, create a new Remix project by running the following command:

npx create-remix@latest

Follow the prompt to create your project. Make sure to choose JavaScript over TypeScript. Once the project is created, paste the following code snippet at app/routes/index.jsx:

import { useState } from "react";

// Define two components
const LoginButton = (props) => {
  return <>
    <p>You are logged out</p>
    <button onClick={props.onClick}>Log In</button>
  </>
}

const LogoutButton = (props) => {
  return <>
    <p>You are now logged in</p>
    <button onClick={props.onClick}>Log Out</button>
  </>
}

export default function Index() {
  // Define a state that the condition will be based upon
  const [loggedIn, setLoggedIn] = useState(false);

  // Define a function that can change the condition value
  const toggleLoggedIn = () => {
    setLoggedIn((currState) => {
        return !currState;
    })
  }

  // Use a switch statement to render one of two components based on login state
  switch(loggedIn) {
    case true: return <LogoutButton onClick={toggleLoggedIn} />
    case false: return <LoginButton onClick={toggleLoggedIn} />
    default: return null
  }
}

This code snippet has inline comments to explain what’s happening at each step. Instead of using an if...else statement or a ternary conditional operator, this example uses a switch statement to decide which component to render.

Similar to the previous examples, here’s what the home page will look like at http://localhost:3000:


As you’ve done previously, you can try clicking on the Log In button to change the state:

How to implement conditional routing in Remix

Similar to how you implemented server-side conditional routing in Next.js, you can implement it in Remix as well. Create a new file, profile.jsx, in the conditionals-remix/app/routes directory and paste the following code in it:

import { redirect } from '@remix-run/node'

const Profile = () => {
    return <p>This is your profile</p>
}

export const loader = () => {
    const allowAccess = false; // This is usually deduced from authentication state of the user

    if (!allowAccess) {
        console.log("Redirecting to home...")

        return redirect('/')
    }

    return null
}

export default Profile

A loader function is used here instead of the getServerSideProps function since Remix supports loaders for loading data in components while they are being rendered on the server (which is similar to what getServerSideProps can do for Next.js). In addition, a redirect utility function (similar to the Next.js redirect function) is returned to initiate a redirect when allowAccess is set to false.

Navigate to http://localhost:3000/profile, and once again, you’ll be instantly redirected to the home page (ie the / route). You can check the terminal to find the redirection log:

# ... other output here
💿 Rebuilding...
💿 Rebuilt in 100ms
GET / 200 - - 17.179 ms
Redirecting to home...        #=== here it is
GET /profile 302 - - 7.107 ms
GET / 200 - - 6.564 ms

If you change the value of allowAccess to true, you’ll be able to view the profile page:

Additional considerations for conditional rendering

When working with conditional rendering, you may wonder how it impacts the virtual DOM. Do the various methods of conditional rendering perform equally? Does creating a new component or Higher Order Component (HOC) to abstract away the details of the conditional rendering impact your app’s DOM (and performance)? In this section, you’ll learn the answers to these questions.

Conditional rendering and JSX

The way in which you implement conditional rendering does not have an impact on your app’s DOM and performance. Consider the following example:

if (loggedIn)
  return <button>Log Out</button>
else
  return <button>Log In</button>

You can also write it like this using the ternary conditional operator:

return <button>{loggedIn ? "Log Out" : "Log In"}</button>

Both of these will reduce to one of the two components (ie the LoginButton and LogoutButton) being returned by the parent component and will create the exact same change in the DOM. You may argue that both of these could return different instances of the button tag, but that’s simply not true. This is because JSX elements, such as buttons and other tags, aren’t instances. They are merely a description or a skeletal outline of your final HTML structure.

As long as you return the same elements in the same structure, the impact on the DOM will be the same. This means you can freely abstract away rendering behind components as long as the JSX structure is the same.

This also gives birth to interesting situations, such as a single state being shared by two components that are exactly the same in structure, where only one of them is conditionally rendered in the same place in the DOM. You can learn more about this on the React website, which shows how large an impact a simple ternary operator in JSX can have on your app’s functioning.

Nested components and conditional rendering

Nesting the definition of one component inside another component is considered an anti-pattern. One of the main reasons is that a nested component is always recreated when its parent component rerenders, regardless of its position in the DOM tree. This behavior is different from what you saw before: because the nested component’s definition is a new function on every render of the parent, React treats it as a brand-new component type each time, remounts it, and discards its state and DOM.
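Here’s a minimal sketch of the problem (the component names are illustrative):

import { useState } from "react";

// Anti-pattern: Child is redefined on every render of Parent, so React sees a new
// component type each time, remounts it and throws away its internal state
function Parent() {
  const [count, setCount] = useState(0);

  function Child() {
    const [text, setText] = useState("");
    return <input value={text} onChange={(e) => setText(e.target.value)} />;
  }

  return (
    <div>
      <button onClick={() => setCount(count + 1)}>Re-render parent ({count})</button>
      <Child />
    </div>
  );
}

// Better: define the child once at module level so its identity is stable across renders
function StableChild() {
  const [text, setText] = useState("");
  return <input value={text} onChange={(e) => setText(e.target.value)} />;
}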

Conclusion and further resources

While conditional rendering looks (and is) quite simple at a quick glance, it’s a very powerful technique to improve the performance and user experience of React apps. In this article, you learned about its benefits and how it’s different from conditional routing, as well as how to implement it in React, Next.js, and Remix.

To learn more about conditional rendering and how React handles the state across renders, make sure to check out the new React docs.

When working on React apps, you’ll often need to select and install npm packages to add additional functionalities (which include routing and state management, among others). Snyk Advisor can help you find the right npm package for your project while maintaining tight security standards for your app. Also, check out our list of 10 security best practices you should follow when working with React.

Snyk can also be used as an IDE extension to find insecure code in React codebases and can help you fix any security vulnerabilities in open source dependencies.

Create an AI Voice Generation App in 5 minutes 🧠✨

TL;DR

This guide shows you how to build a web app in Go that uses ElevenLabs generative voice AI to create lifelike speech from text. You can then deploy it to the cloud using Encore‘s free development cloud.

🚀 What we’re doing:

  • Install Encore & create an empty app
  • Download the ElevenLabs Encore package
  • Run the backend locally
  • Create a simple frontend
  • Deploy to the cloud

💽 Install Encore

Install the Encore CLI to run your local environment:

  • macOS: brew install encoredev/tap/encore
  • Linux: curl -L https://encore.dev/install.sh | bash
  • Windows: iwr https://encore.dev/install.ps1 | iex

🛠 Create your app

Create a new Encore application with this command and select the Empty app starter:

encore app create

💾 Download the ElevenLabs package

  1. Download the elevenlabs package directory from https://github.com/encoredev/examples/tree/main/bits/elevenlabs and add it to the app directory you just created.
  2. Sync your project dependencies by running go mod tidy. (Note: This requires that you have Go 1.21, or later, installed.)

Get your ElevenLabs API Key

You’ll need an API key from ElevenLabs to use this package. You can get one by signing up for a free account at https://elevenlabs.io.

Once you have the API key, save it as a secret using Encore’s secret manager with the name ElevenLabsAPIKey, by running:

encore secret set --type dev,prod,local,pr ElevenLabsAPIKey

🏁 Run your app locally

Start your application locally by running:

encore run

You can now open Encore’s local development dashboard at http://localhost:9400 to see your app’s API documentation, call the API using the API explorer and view traces, and more.

Encore local dev dash

🧐 Try out the API

Now let’s play around a bit with our shiny new API!

From the API Explorer in the local development dashboard, try calling the elevenlabs.DownloadAudio endpoint with the text input of your choice in the request body.

API Explorer

This will use the API to generate an MP3 audio file and download it to your app root folder: speech.mp3.

API Endpoints

Now that we know it works, let’s review the API endpoints in the elevenlabs package.

  • elevenlabs.ServeAudio: ServeAudio generates audio from text and serves it as mpeg to the client.
  • elevenlabs.StreamAudio: StreamAudio generates audio from text and streams it as mpeg to the client.
  • elevenlabs.DownloadAudio: DownloadAudio generates audio from text and saves the audio file as mp3 to disk.

🖼 Create a simple frontend

Now let’s make our app more user-friendly by adding a simple frontend.

  • Create a subfolder in your app root called frontend.
  • Inside /frontend, create frontend.go and paste the following code into it:
package frontend

import (
    "embed"
    "net/http"
)

var (
    //go:embed index.html
    dist embed.FS

    handler = http.StripPrefix("/frontend/", http.FileServer(http.FS(dist)))
)

// Serve serves the frontend for development.
// For production use we recommend deploying the frontend
// using Vercel, Netlify, or similar.
//
//encore:api public raw path=/frontend/*path
func Serve(w http.ResponseWriter, req *http.Request) {
    handler.ServeHTTP(w, req)
}
  • Inside /frontend, create index.html and paste the following code into it:

 lang="en">

   charset="UTF-8">
   name="viewport"
        content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
   http-equiv="X-UA-Compatible" content="ie=edge">
  </span>AI Speech Generator<span class="nt">
  


Encore + ElevenLabs AI Speech Generator

id="text-to-speech" cols="40" rows="5">Hello dear, this is your computer speaking. id="speak-button">Say it!

You should see this:

Local Running App

— Congratulations, you now have a working AI Voice Generator! 🎉

🚀 Deploy to the cloud

Deploy your app to a free cloud environment in Encore’s development cloud by running:

git push encore

👉 Then head over to the Cloud Dashboard to monitor your deployment and find your production URL by going to the overview page for the environment you just created. It will be something like: https://staging-[APP-ID].encr.app.

Environment overview

Once you have your root API address and the deploy is complete, open your app in your browser: https://staging-[APP-ID].encr.app/frontend.

🎉 Great job – you’re done!

You now have a scalable and production-ready AI-powered backend running in the cloud.

Keep building with Encore using these Open Source App Templates. 👈

If you have questions or want to share your work, join the developer hangout in Encore’s community Slack. 👈

Learn More

High-Performing Engineering Teams @ Meta/Facebook

High-performing teams exhibit several key characteristics that contribute to their success. These characteristics are often seen in teams that consistently achieve their goals, collaborate effectively, and produce outstanding results. Here are some of the common characteristics of high-performing teams:

Join Me

Follow me on #TheEngineeringBolt, Twitter and Linkedin for more Career, Leadership and Growth advice.

1. Clear Goals and Objectives

High-performing teams have a shared understanding of their goals and objectives. These goals are specific, measurable, achievable, relevant, and time-bound (SMART), providing a clear direction for the team’s efforts.

2. Strong Leadership

Effective leadership is crucial for guiding the team, making decisions, and resolving conflicts. Leaders in high-performing teams lead by example, inspire team members, and foster an environment of trust and respect.

3. Open Communication

Members of high-performing teams communicate openly and honestly. They share information, ideas, and feedback freely, promoting a culture of transparency and effective collaboration.

4. Collaboration

Team members work collaboratively, leveraging each other’s strengths and expertise. They recognize the value of diverse perspectives and contribute to the team’s success by actively participating and sharing their knowledge.

5. Trust and Mutual Respect

Trust is a foundational element of high-performing teams. Team members trust each other’s capabilities, intentions, and commitments. They treat each other with respect and empathy, fostering a positive and supportive atmosphere.

6. Clear Roles and Responsibilities

Each team member understands their role and responsibilities within the team. This clarity helps prevent confusion, overlaps, and gaps in work, contributing to smoother workflows.

7. Accountability & Adaptability

Team members take ownership of their tasks and commitments. They hold themselves and each other accountable for meeting deadlines and delivering high-quality work.

High-performing teams are adaptable and responsive to changing circumstances. They are willing to adjust their strategies and approaches when necessary, without losing sight of their goals.

8. Continuous Learning

Team members are committed to their personal and professional growth. They seek opportunities to learn, improve their skills, and stay updated on relevant industry trends.

9. Conflict Resolution

Conflict is addressed openly and constructively within high-performing teams. Team members are skilled at resolving disagreements in a respectful manner, focusing on finding solutions that benefit the team as a whole.

10. Innovation

These teams encourage creativity and innovation. They are open to new ideas and encourage members to think outside the box, fostering an environment where innovative solutions can thrive.

11. Recognition and Reward

Achievements and contributions are acknowledged and celebrated within the team. This recognition reinforces positive behavior and motivates team members to continue performing at their best.

12. Resource Allocation

High-performing teams have access to the necessary resources, tools, and support to carry out their tasks effectively. Adequate resource allocation helps prevent unnecessary obstacles to success.

13. Results-Oriented

Ultimately, high-performing teams are focused on delivering results. They consistently achieve their goals and produce high-quality outcomes that align with their objectives.

Creating and sustaining a high-performing team requires ongoing effort, effective leadership, and a commitment to fostering a positive team culture. It’s important to note that while these characteristics are aspirational, the specific dynamics of each team and its context can influence how these characteristics manifest.

Join Me

Follow me on #TheEngineeringBolt, Twitter and Linkedin for more Career, Leadership and Growth advice.


Marketing Moves That Helped Skims Reach A $4 Billion Valuation

Earlier this month, Kim Kardashian’s apparel company Skims made headlines after closing a $270 million fundraising round, reaching a valuation of $4 billion. In a business climate where VCs are pulling back investment from direct-to-consumer brands, this accomplishment is quite the feat for a four-year-old business.

Skims initially launched in 2019, offering shapewear and intimates. The company has since expanded its product line to include clothing, loungewear, swimwear, and kids’ items. This year, Skims is on track to bring in $750 million in sales, up 50% from 2022.

The brand experienced mega-growth through smart positioning in front of its ideal customers, 70% of whom are millennials and Gen Z. Continue reading to learn the marketing moves that helped Skims reach unicorn status four times over.

Skims Marketing Strategies

Winning on Social

Instagram is Skims’ top social media channel. With over 5.2 million followers, the brand often takes to Instagram to announce new product launches and social campaigns.

Looking through the comments on the Skims Instagram account, it also appears to be a customer service channel where buyers give feedback about what products they’d like to see and inquire about when products will be back in stock.

It’s also worth noting the popularity of Skims products on TikTok. Though the official Skims account on TikTok has 1.1 million followers, the brand’s main marketing power on the platform is seen through word of mouth and user-generated content.

@callahanrahm skims if you’re seeing this, your email marketing is working 😝
#skimshaul
#skimscottoncollection
#skimsloungeset
♬ original sound – Callahan Rahm

Each day users post their hauls and reviews of Skims products on TikTok, generating buzz for the brand. Users sharing Skims “dupes” is another popular niche on TikTok, which still drives notoriety and demand for the original products.

Email Marketing That Converts

In addition to social, Skims also uses email marketing to drive sales. On average, the brand sends five emails per week to its list announcing new product drops, restocks of popular products, retail collaborations, and direct CTAs to purchase.

skims email marketing example

Image Source

Scarcity Marketing

The more rare an item is, the higher its perceived value. This is the law of supply and demand from your Econ 101 class — a key factor in scarcity marketing that Skims has leveraged to drive sales.

The brand has been known to drop new and popular items in limited quantities making coveted items sell out quickly. This drives those who missed out on the initial run to begin following the brand more closely so they can be alerted when the product is back in stock, encouraging them to purchase as soon as the product is available.

Strategic Partnerships and Endorsements

Skims has participated in several partnerships that have gotten people talking. Its 2021 co-branded collection with Fendi brought in $1 million in sales just one minute after it launched. The brand also partnered with the Olympics in 2020 and 2022, outfitting Team USA in a capsule collection of loungewear and sleepwear.

skims olympics partnership

Image Source

The endorsements have also found their way into Skims advertising.

Earlier this year two popular actors from the television show The White Lotus, Simona Tabasco and Beatrice Grannò, starred in a campaign promoting Skims’ Valentine’s Day collection shortly after the show’s finale.

Last year, the brand ran a campaign featuring former Victoria’s Secret models from the 90s and 2000s — a move that played up nostalgia marketing for the brand’s millennial customers who grew up shopping at Victoria’s Secret and watching models walk in the controversial brand’s fashion shows.

While Kardashian’s influence has positively impacted Skims’ business success, the brand’s future wouldn’t be as bright without well-executed, multi-channel marketing. With this latest round of fundraising, Skims is well-positioned to expand its product line and potentially IPO.

API Rate Limiting Cheat Sheet

Jump to a section:

  • Gateway-level rate limiting
  • Token bucket algorithm
  • Leaky bucket algorithm
  • Sliding window algorithm
  • Distributed rate limiting
  • User-based rate limiting
  • API key rate limiting
  • Custom rate limiting

Gateway-level rate limiting

  • Gateway-level rate limiting is a popular approach that enforces rate limits at the API gateway, in front of your backend services, rather than inside each individual service.
  • Gateway-level rate limiting is typically implemented in API gateways such as Kong, Google’s Apigee, or Amazon API Gateway.
  • Gateway-level rate limiting can provide simple and effective rate limiting, but may not offer as much fine-grained control as other approaches.

Token bucket algorithm

Token bucket algorithm
image source

  • The token bucket algorithm is a popular rate limiting algorithm that involves allocating tokens to API requests.
  • The tokens are refilled at a set rate, and when an API request is made, it must consume a token.
  • If there are no tokens available, the request is rejected.
  • The token bucket algorithm is commonly used in many rate limiting libraries and tools, such as rate-limiter, redis-rate-limiter, and the Google Cloud Endpoints.

More: Token Bucket vs Bursty Rate Limiter by @animir
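To make the mechanics concrete, here’s a minimal in-memory token bucket sketch (illustrative only; a real limiter would track a bucket per client and usually persist it in a shared store):

class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;          // maximum number of tokens the bucket can hold
    this.tokens = capacity;            // start full
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  // Top up tokens based on how much time has passed since the last refill
  refill() {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
  }

  // Returns true if the request may proceed, false if it should be rejected
  tryConsume() {
    this.refill();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Example: burst capacity of 10 requests, refilled at 2 tokens per second
const bucket = new TokenBucket(10, 2);
if (!bucket.tryConsume()) {
  // reject the request with HTTP 429
}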

Leaky bucket algorithm

Leaky bucket algorithm
image source

  • The leaky bucket algorithm is similar to the token bucket algorithm, but instead of allocating tokens, incoming API requests are added to a “bucket” and drained (processed) from it at a set rate.
  • If the bucket overflows, the requests are rejected.
  • The leaky bucket algorithm can be useful for smoothing out request bursts, and for ensuring that requests are processed at a consistent rate.
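For comparison, here’s an equally minimal in-memory leaky bucket sketch (purely illustrative): incoming requests fill the bucket, and it drains at a fixed rate.

class LeakyBucket {
  constructor(capacity, leakPerSecond) {
    this.capacity = capacity;        // how many queued requests the bucket can hold
    this.leakPerSecond = leakPerSecond;
    this.water = 0;                  // current "level" of the bucket
    this.lastLeak = Date.now();
  }

  // Drain the bucket according to how much time has passed
  leak() {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastLeak) / 1000;
    this.water = Math.max(0, this.water - elapsedSeconds * this.leakPerSecond);
    this.lastLeak = now;
  }

  // Returns true if the request fits in the bucket, false if it overflows
  tryAdd() {
    this.leak();
    if (this.water + 1 <= this.capacity) {
      this.water += 1;
      return true;
    }
    return false;
  }
}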

Sliding window algorithm

Sliding window algorithm
image source

  • The sliding window algorithm is a rate limiting approach that involves tracking the number of requests made in a sliding window of time.
  • If the number of requests exceeds a set limit, further requests are rejected.
  • The sliding window algorithm is commonly used in many rate limiting libraries and tools, such as Django Ratelimit, Express Rate Limit, and the Kubernetes Rate Limiting.

More: Rate limiting using the Sliding Window algorithm by @satrobit
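Here’s a small in-memory sketch of the sliding window log variant: it keeps the timestamps of recent requests and rejects a request once the count inside the window exceeds the limit (real implementations typically keep these counters in Redis or a similar shared store):

class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = new Map(); // key (e.g. client IP) -> array of request times
  }

  allow(key) {
    const now = Date.now();
    const windowStart = now - this.windowMs;

    // Keep only requests that are still inside the sliding window
    const recent = (this.timestamps.get(key) || []).filter((t) => t > windowStart);

    if (recent.length >= this.limit) {
      this.timestamps.set(key, recent);
      return false; // over the limit, reject
    }

    recent.push(now);
    this.timestamps.set(key, recent);
    return true;
  }
}

// Example: at most 100 requests per client per minute
const limiter = new SlidingWindowLimiter(100, 60 * 1000);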

Distributed rate limiting

  • For high-traffic APIs, it may be necessary to implement rate limiting across multiple servers.
  • Distributed rate limiting algorithms such as Redis-based rate limiting or Consistent Hashing-based rate limiting can be used to implement rate limiting across multiple servers.
  • Distributed rate limiting can help to ensure that rate limiting is consistent across multiple servers, and can help to reduce the impact of traffic spikes.

In this example, we’ll create a simple Next.js application with a rate-limited API endpoint using Redis and Upstash. Upstash is a serverless Redis database provider that allows you to interact with Redis easily and cost-effectively.

First, let’s create a new Next.js project:

npx create-next-app redis-rate-limit-example
cd redis-rate-limit-example

Install the required dependencies:

npm install ioredis@4.27.6 express-rate-limit@5.3.0

Create a .env.local file in the project root to store your Upstash Redis credentials:

UPSTASH_REDIS_URL=your_upstash_redis_url_here

Replace your_upstash_redis_url_here with your actual Upstash Redis URL.

Create a new API route in pages/api/limited.js:

import rateLimit from 'express-rate-limit';
import { connectRedis } from '../../lib/redis';
import { RedisStore } from '../../lib/redis-store';

const redisClient = connectRedis();
const windowMs = 60 * 1000; // 1 minute

const rateLimiter = rateLimit({
  store: new RedisStore({ client: redisClient, windowMs }),
  windowMs,
  max: 5, // limit each IP to 5 requests per minute
  handler: (req, res) => {
    res.status(429).json({ message: 'Too many requests, please try again later.' });
  },
});

// express-rate-limit produces Express-style middleware, so wrap it in a promise
// to await it inside a Next.js API route. The promise resolves when the
// middleware calls next() or when it has already ended the response itself
// (for example by sending the 429).
function runMiddleware(req, res, middleware) {
  return new Promise((resolve, reject) => {
    res.once('finish', resolve);
    middleware(req, res, (result) =>
      result instanceof Error ? reject(result) : resolve(result)
    );
  });
}

export default async function handler(req, res) {
  try {
    await runMiddleware(req, res, rateLimiter);
  } catch (error) {
    return res.status(500).json({ message: 'Internal server error' });
  }

  // If the limiter already answered with a 429, don't send a second response.
  if (res.headersSent) {
    return;
  }

  res.status(200).json({ message: 'Success! Your request was not rate-limited.' });
}

export const config = {
  api: {
    bodyParser: false,
  },
};

Create a lib/redis.js file to handle Redis connections:

import Redis from 'ioredis';

let cachedRedis = null;

export function connectRedis() {
  if (cachedRedis) {
    return cachedRedis;
  }

  const redis = new Redis(process.env.UPSTASH_REDIS_URL);
  cachedRedis = redis;
  return redis;
}

Create a new RedisStore class in lib/redis-store.js:

import { connectRedis } from './redis';

// A minimal store implementing the incr-based contract that express-rate-limit
// expects from custom stores (incr, decrement, resetKey). It counts hits per key
// in Redis and expires each counter after the rate-limit window.
export class RedisStore {
  constructor({ client, windowMs = 60 * 1000 } = {}) {
    this.redis = client || connectRedis();
    this.windowSeconds = Math.ceil(windowMs / 1000);
  }

  incr(key, callback) {
    this.redis
      .incr(key)
      .then(async (hits) => {
        if (hits === 1) {
          // First hit in this window: start the expiry clock for the counter.
          await this.redis.expire(key, this.windowSeconds);
        }
        callback(null, hits);
      })
      .catch((err) => callback(err));
  }

  decrement(key) {
    this.redis.decr(key);
  }

  resetKey(key) {
    this.redis.del(key);
  }
}

Now you can test your rate-limited API endpoint by starting the development server:

npm run dev

Visit http://localhost:3000/api/limited in your browser or use a tool like Postman or curl to make requests. You should see the “Success! Your request was not rate-limited.” message. If you make more than 5 requests within a minute, you’ll receive the rate limit message:

Too many requests, please try again later.

User-based rate limiting

  • Some APIs need rate limiting at the user level, rather than by IP address or client ID.
  • User-based rate limiting tracks the number of requests made by a particular user account and rejects requests once the user exceeds a set limit.
  • User-based rate limiting is common in API frameworks such as Django Rest Framework and can be built on top of session-based or token-based authentication; a sketch follows below.
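
If you are already using express-rate-limit (as in the Next.js example above) and an authentication middleware has populated req.user, one way to key the limiter on the user account instead of the IP address might look like this; the limits and the req.user shape are assumptions for the sketch.

import rateLimit from 'express-rate-limit';

// Count requests per authenticated user rather than per client IP.
// Falls back to the IP for unauthenticated requests.
const userRateLimiter = rateLimit({
  windowMs: 60 * 1000,                   // 1 minute window
  max: 100,                              // 100 requests per user per minute
  keyGenerator: (req) => (req.user ? req.user.id : req.ip),
  handler: (req, res) => {
    res.status(429).json({ message: 'Too many requests for this account.' });
  },
});

export default userRateLimiter;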

API key rate limiting

  • For APIs that authenticate clients with an API key, rate limiting can be applied at the API key level.
  • API key rate limiting tracks the number of requests made with a particular key and rejects requests once the key exceeds a set limit.
  • API key rate limiting is common in libraries such as Flask-Limiter and builds on API key-based authentication; a sketch follows below.
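
Along the same lines, here is a sketch that keys the limiter on the client’s API key; the x-api-key header name is just a common convention, and the quota numbers are placeholders.

import rateLimit from 'express-rate-limit';

// Count requests per API key, falling back to the IP if no key is sent.
const apiKeyRateLimiter = rateLimit({
  windowMs: 60 * 60 * 1000,              // 1 hour window
  max: 1000,                             // 1000 requests per key per hour
  keyGenerator: (req) => req.headers['x-api-key'] || req.ip,
  handler: (req, res) => {
    res.status(429).json({ message: 'API key quota exceeded, please slow down.' });
  },
});

export default apiKeyRateLimiter;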

Custom rate limiting

  • Finally, many other rate limiting approaches can be customized to suit the needs of a particular API.
  • Examples include adaptive rate limiting, which adjusts the limit based on the current traffic or server load, and request complexity-based rate limiting, which takes the cost of individual requests into account when enforcing limits.
  • Custom approaches are useful for tuning the rate limiting strategy to a specific API use case; a small adaptive sketch follows below.
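
As a small, self-contained illustration of adaptive rate limiting (not a production recipe), the per-client budget below is derived from the server’s load average, with arbitrary thresholds:

import os from 'os';

// The busier the server, the tighter the per-client limit.
function currentLimit() {
  const loadPerCore = os.loadavg()[0] / os.cpus().length;
  if (loadPerCore > 0.9) return 10;      // heavy load: clamp hard
  if (loadPerCore > 0.5) return 50;      // moderate load
  return 200;                            // normal operation
}

// Minimal fixed-window counter that re-reads the adaptive limit on every request.
const counters = new Map();              // key -> { count, windowStart }
const WINDOW_MS = 60 * 1000;

export function allowRequest(key) {
  const now = Date.now();
  const entry = counters.get(key) || { count: 0, windowStart: now };
  if (now - entry.windowStart >= WINDOW_MS) {
    entry.count = 0;                     // start a new window
    entry.windowStart = now;
  }
  entry.count += 1;
  counters.set(key, entry);
  return entry.count <= currentLimit();
}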

For my latest project, Pub Index API, I am using an API gateway for rate limiting.

More: RESTful API Design Cheatsheet

The post API Rate Limiting Cheat Sheet appeared first on ProdSens.live.

Unit Tests Should NOT Be In Your Coding Evaluations https://prodsens.live/2023/03/15/unit-tests-should-not-be-in-your-coding-evaluations/?utm_source=rss&utm_medium=rss&utm_campaign=unit-tests-should-not-be-in-your-coding-evaluations https://prodsens.live/2023/03/15/unit-tests-should-not-be-in-your-coding-evaluations/#respond Wed, 15 Mar 2023 17:02:02 +0000 https://prodsens.live/2023/03/15/unit-tests-should-not-be-in-your-coding-evaluations/ unit-tests-should-not-be-in-your-coding-evaluations


I’ve remained silent on this topic for far too long. But now I’m about to go off. Buckle up…

In the last few weeks I’ve had some experiences with unit tests during coding evaluations that have left me exasperated and infuriated. This isn’t the first time that I’ve run into these types of issues. But I’m finally fed-up enough to proclaim loudly and proudly that:

Unit tests should have no place in the coding evaluations that you foist upon prospective candidates.

That may sound like heresy to some of you. (HINT: I don’t care.) But if it really bothers you that much, there’s a good chance that you’re part of the problem.


I’m not a heretic

Before I explain exactly why unit tests should play no part in your coding evaluations, I want to make it clear that I’m not taking a stand against unit testing in general. Of course you should be using unit tests in your dev environment. Of course you can vastly improve the quality of your coding efforts through copious unit testing. Of course you want all of your devs to be comfortable with unit testing.

So of course you should be including it in all of your coding evaluations… Right???

NO.

Allow me to explain…


Writing tests wastes time and can be disrespectful to the candidate

During the normal day-to-day operations in your dev environment, it’s typically unacceptable for someone to say, “I finished this task and submitted the work – but I didn’t write any unit tests because I simply ran out of time.” During normal “dev operations”, a task isn’t actually done until you’ve accounted for proper testing. But this standard should not apply during coding evaluations.

Most coding evaluations are timed. They give a set period in which you must fix a bug or build a feature or alter an existing feature in some way. Often, these time constraints can feel daunting – even to senior devs.

Imagine you have one hour to build some feature in React. Maybe it’s not that hard. Maybe you’re feeling pretty confident that you can knock this out. But in the middle of building it, you run into some kinda bug. You know… the kind where you sit there for a few minutes and just think, “Wait… What the heck’s going on?” Maybe you forgot to hook up a key handler and it takes you some time to realize what you’ve overlooked. Maybe you just made some stupid typo that’s not immediately apparent in your IDE. Regardless, the point is that, even in the simplest of tasks, sometimes you can “burn” 10-15 minutes rectifying something that you screwed up.

Eventually, you fix it. And you go on to build the complete feature right under the hour deadline. In fact, you built it well. You’re feeling pretty confident about the code that you cranked out.

But you didn’t get time to add any tests…

If your code is solid, and you completed the task, and you demonstrated a solid understanding of React, there’s no way in hell you should ever be marked down (or eliminated from contention) merely because you didn’t also have the time to slap unit tests onto your solution.

Let me be absolutely clear here. The mere act of writing a unit test is generally quite easy. Most of the testing frameworks have very similar syntaxes and they’re designed to help you write tests in a way that makes semantic sense. They use verbiage like:

it('Should enqueue the items to the queue', (done) => {...});

and

onDequeueSpy.calledOnce.should.be.true;

So it should feel fairly natural (to any established dev) to write tests in this manner.

But even though they can be syntactically self-explanatory, it can still take a little… nuance (and nuance equates to: time) to add unit tests that are actually meaningful and properly account for edge cases. The time it takes to implement these should not be a burden in your normal dev cycle. But it’s absolutely a burden when you’re staring down a timer during a coding evaluation.

A few weeks ago I completed a coding test where they wanted me to add a lot of features to an existing React codebase. And they told me that it should all be done… in 45 minutes. It wasn’t just that I had to add new components and get all the event handlers hooked up, but I had to be careful to do it in the exact style that already existed in the rest of the codebase. Furthermore, there were numerous CSS requirements that dictated precisely how the solution should look. So it wasn’t enough just to get the logic working. I also had to get everything matching the design spec. And again, I had to do that all in 45 minutes.

But of course, this wasn’t all they wanted. The requirements also said that all existing tests should pass and I should write new tests for any additional features that were added. And I was supposed to do all of that in 45 minutes. It was patently ridiculous.

If I’ve coded up a fully-functioning solution that meets the task’s requirements, but I didn’t get time to put proper unit tests on my new features, and you still want to eliminate me from contention, then… good. Believe me when I say that I don’t wanna work on your crappy dev team anyway.

But these aren’t the only problems with unit tests in coding challenges…


Broken tests

So maybe you’re not asking me to write tests. Maybe you just have a buncha unit tests in the codebase that need to pass before I can submit my solution? Sounds reasonable, right?

Well…

On numerous occasions, I’ve opened up a new codebase, in which I’m supposed to write some new solution, and found that the tests don’t pass out-of-the-box. Granted, if the “test” is that the codebase contains a bug, and the unit tests are failing because of that bug, and you now want me to fix that bug, then… OK. Fine. I get it.

But I’ve encountered several scenarios where I was supposed to be building brand new functionality. Yet when I open the codebase and run the tests that only exist to verify the legacy functionality – they fail. So then I spend the first 15 minutes of my precious evaluation time figuring out how to fix the tests on the base functionality, before I’ve even had a chance to write a single line of my new solution.

If this is the kinda test you wanna give me, then I don’t wanna work on your crappy dev team anyway.


Secret requirements

Here’s another delightful headache I’ve run into from your embedded unit tests: You’re not asking me to write new unit tests, but you’ve loaded a whole bunch of tests into the codebase that are designed to determine whether the new feature I’ve built performs according to the spec. So I carefully read through all the instructions, and I crank out a solution that satisfies all of the instructions, and it runs beautifully in a browser, but then I run the tests…

And they fail.

Then I go back and re-read all of the instructions, even more carefully than I did the first time. And – lo and behold – I’ve followed the instructions to a tee. But the unit tests still FAIL.

How do I remedy this? Well, I’ve gotta open up the unit test files and start working backward from the failures, trying to figure out why my instructions-compliant solution still fails your unit tests. That’s when I realize that the unit tests contain secret requirements.

For example, I ran into this scenario just yesterday. The React task had many features that should only display conditionally. Like, when the API returns no results, you should display a “No results found” div. But that div should not display if the API did in fact return results. And I coded it up to comply with that requirement. But the test still failed.

Why did it fail? Because the test was looking, hyper-specifically, for the “No results” div to be NULL. I coded it to use display: none. The original requirement merely stated that the div should not be displayed. It never stated that the resulting div must in fact be NULL. So to get the test to pass, I had to go back into my solution (the one that perfectly complied with the written instructions), and change the logic so it would NULL-out the div.

I had to do the same for several other elements that had similar logic. Because those elements also had their own unit tests – that were all expecting an explicit value of NULL.

If this had been made clear to me in the instructions, then I would’ve coded it that way from the beginning. But it was never stated as such in the instructions. So I had to waste valuable time in the coding test going back and refactoring my solution. I had to do this because the unit tests contained de facto “secret requirements”.

If you can’t be bothered to ensure that the unit tests in your coding challenge don’t contain secret requirements, then I absolutely have no desire to work on your crappy dev team anyway.


Illogical unit tests

Maybe you’re not asking me to crank out new unit tests, and maybe you’re not hiding “secret requirements” in your unit tests, and maybe all of the tests tied to the legacy code work just fine out-of-the-box. So that shouldn’t be any problem for me to comply with, right??

Umm…

Yesterday I was coding an asynchronous queue in Node and I ran into this gem of a unit test:

it('Should dequeue items every 250ms from the queue', (done) => {
    queue.enqueue(1);
    queue.enqueue(2);
    queue.start();
    queue.getCurrentInterval().should.eql(250);
    var onDequeueSpy = sinon.spy();
    queue.on('dequeued', onDequeueSpy);
    setTimeout(() => {
        onDequeueSpy.calledOnce.should.be.true;
        onDequeueSpy.firstCall.args[0].should.eql(1);
    }, 260);
    setTimeout(() => {
        onDequeueSpy.calledTwice.should.be.true;
        onDequeueSpy.getCall(1).args[0].should.eql(2);
        done();
    }, 510);
});

Here’s what this unit test does:

  1. It adds two items to the queue with queue.enqueue().

  2. It starts the dequeuing process with queue.start().

  3. It ensures that the interval is set to the default value of 250 milliseconds.

  4. It then waits 260 milliseconds and checks that the queue has only been called once. (This PASSES.)

  5. It then waits exactly 250 additional milliseconds.

  6. It then checks that the queue has been called exactly twice.

By setting the wait time to be exactly 250 milliseconds after the first check, it sets up a race condition whereby this test intermittently fails roughly 50% of the time.

You may be thinking, “But, the second check should happen 510 milliseconds after the queue has been started, which should allow the queue to have been fired twice.” But since this only allows 10 milliseconds of leeway, it leads to intermittent scenarios where sometimes the test passes – and sometimes it fails.

Of course, one easy way to “fix” this issue is to give the second call a little more breathing room. But you can’t update the test files. If you try to do so, your git push is blocked.

I played around with this for a long time, trying to get it to consistently work without altering the test files. To no avail.
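
For what it’s worth, had the test files been editable, the race could have been removed entirely with sinon’s fake timers, assuming the queue drives its interval with standard setTimeout/setInterval calls (which fake timers intercept). A sketch of what that might look like:

it('Should dequeue items every 250ms from the queue', () => {
    // Fake timers make the 250ms interval deterministic instead of racy.
    const clock = sinon.useFakeTimers();
    const onDequeueSpy = sinon.spy();

    queue.enqueue(1);
    queue.enqueue(2);
    queue.on('dequeued', onDequeueSpy);
    queue.start();
    queue.getCurrentInterval().should.eql(250);

    clock.tick(250); // advance exactly one interval
    onDequeueSpy.calledOnce.should.be.true;
    onDequeueSpy.firstCall.args[0].should.eql(1);

    clock.tick(250); // advance a second interval
    onDequeueSpy.calledTwice.should.be.true;
    onDequeueSpy.getCall(1).args[0].should.eql(2);

    clock.restore();
});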

Here’s another jewel from the same coding challenge:

it('Should continue to listen for new data even on pausing the dequeue process', (done) => {
    var onDequeueSpy = sinon.spy();
    var onEnqueueSpy = sinon.spy();
    queue.on('dequeued', onDequeueSpy);
    queue.start();
    queue.enqueue(2);
    queue.enqueue(3);
    queue.enqueue(4);
    queue.pause();
    setTimeout(() => {
        onDequeueSpy.callCount.should.eql(0);
        queue.start();
        queue.emit('interval', 50);
        queue.on('enqueued', onEnqueueSpy);
        queue.enqueue(95);
        queue.enqueue(110);
    }, 260);
    setTimeout(() => {
        queue.enqueue(221);
        onEnqueueSpy.callCount.should.eql(3);
        onDequeueSpy.callCount.should.eql(1);
        queue.print().should.eql([3, 4, 95, 110, 221]);
        done();
    }, 320);
});

This test FAILS on this line:

    onDequeueSpy.callCount.should.eql(1);

Here’s why it fails. The queue is started with a default value of 250 milliseconds. Then it pauses the queue. Then, in the first setTimeout(), it re-starts the queue. And then it sets the interval to 50 milliseconds. But it doesn’t set the interval to 50 milliseconds until after it re-started the queue. And when it restarts the queue, it will initially start with a delay of 250 milliseconds.

This means that the first 250 millisecond delay will need to be run in full before the next queue call can run with a delay of 50 milliseconds.

Of course, it doesn’t matter that the unit test itself was janky AF. All that matters to the evaluator is that the (illogical) test didn’t pass.

In the end, I suppose it’s a good thing that they use these illogical tests, because guess what? I sure-as-hell don’t want to work on their crappy dev team anyway. But I’m still annoyed as hell because I wasted hours of my time yesterday doing their challenge, after I’d already had a great live interview with them and after I’d already coded a working solution, because they can’t be bothered to write logical unit tests.

The post Unit Tests Should NOT Be In Your Coding Evaluations appeared first on ProdSens.live.

13 Types of Customer Feedback: How to Collect Valuable Feedback As a Product Manager https://prodsens.live/2023/03/01/13-types-of-customer-feedback-how-to-collect-valuable-feedback-as-a-product-manager/?utm_source=rss&utm_medium=rss&utm_campaign=13-types-of-customer-feedback-how-to-collect-valuable-feedback-as-a-product-manager https://prodsens.live/2023/03/01/13-types-of-customer-feedback-how-to-collect-valuable-feedback-as-a-product-manager/#respond Wed, 01 Mar 2023 10:05:43 +0000 https://prodsens.live/2023/03/01/13-types-of-customer-feedback-how-to-collect-valuable-feedback-as-a-product-manager/


If you’re wondering what different types of customer feedback are and how to collect them, you’re in the right place!

In this article, we look at 13 different ways to collect feedback that product managers can leverage to gain a deep understanding of customer needs and make informed product decisions.

Let’s dive right into it!

TL;DR

What is customer feedback?

Customer feedback is all the information and opinions that you get from your customers about their experiences with the product.

This includes positive feedback, which highlights the aspects of the product that satisfy user needs, as well as negative customer feedback. The latter focuses on those parts of the user experience which don’t live up to their expectations.

Active feedback vs passive feedback

A solid customer feedback strategy must include active and passive feedback methods.

What is active feedback?

Active feedback is the kind of feedback that you solicit actively from your users. For example, this could be through regular NPS surveys or feature surveys triggered in-app.

Active customer feedback example from Slack.

What is passive feedback?

Passive feedback is a kind of on-demand feedback.

Whenever users would like to share their opinions, request a feature or report a bug, they can do it via a feedback widget located in your resource center or in a visible place on your product dashboard.

Passive customer feedback example from Slack.

13 Types of customer feedback and how to collect them

There are multiple types of customer feedback. For example, positive and negative customer reviews on public sites like G2 or Capterra can be invaluable sources of information.

However, in this article, we’ll be focusing only on the feedback that’s relevant to product managers.

User onboarding feedback

User onboarding feedback helps you evaluate the quality of the onboarding experience you’re offering to your users.

This type of customer feedback allows you to personalize the onboarding experience for your users, remove friction from the process and reduce the time to value.

How to collect user onboarding feedback?

Onboarding feedback is normally collected via in-app surveys at the end of the onboarding process. For example, they could be triggered by an activation event.

To increase the response rate, the survey should consist of a short question focusing on overall satisfaction with the experience. You should also follow it up with a qualitative question that will help you understand better how they’re feeling about the onboarding process.

Customer onboarding feedback form.

Product feedback

Product feedback helps you improve your product so that it fully satisfies user needs and delivers a more satisfying experience. Acting on product feedback will not only help you retain existing customers but will also make the product more attractive to prospective ones.

How to collect product feedback?

Popular product feedback collection methods include customer interviews, focus groups, and usability testing.

Each of these methods has its strengths and limitations, so ideally you should use a combination of them to get a more complete picture.

For example, usability testing can help you make the UI more intuitive while user interviews can help you understand why users act in a particular way.

Customer interview template.

Feature requests

Feature requests give you ideas on how you can make the product more competitive and delightful for your users.

Apart from that they can also be an indication of inadequate product onboarding.

How so?

If users ask for a feature you already have, or for functionality that already exists, it means the feature is relevant to their use case and yet they haven’t been able to discover it.

How to collect feature requests?

The easiest way to collect feature requests is via a feature request survey or the passive feedback widget. Customer-facing roadmaps are another good way to do so. You can create them in a project management tool like Trello and share them with your users.

Feature request form.

Customer preference feedback

Customer preference feedback gives you insights into what customers like and what they don’t.

Such knowledge can help you make more accurate product development decisions. If your users prefer one UI design or one feature over another, that’s what you should prioritize in your backlog.

How to collect customer preference feedback?

Focus groups are one popular way to collect customer preference feedback. However, they are time-consuming to organize and the outcomes could be affected by groupthink.

In-app surveys are way easier and cheaper to run and you can target a wider consumer base to get more representative opinions.

Customer preference feedback survey.

In-app rating feedback

In-app rating feedback is a quick way to evaluate user satisfaction with a particular experience without leaving the app.

For example, you could use it to ask users to evaluate a template or a feature they’ve just used. Such feedback not only shows how valuable the feature is for the user; if you find a way to share the ratings with other users, for example in your app store listing, it can also increase the feature’s visibility.

How to collect in-app ratings?

In-app rating feedback is easy to collect. All you have to do is create a pop-up modal with a simple rating scale.

In-app rating feedback modal.

Customer loyalty feedback

Customer loyalty feedback enables teams to assess how emotionally attached your customers are to your product or brand.

This kind of feedback is valuable because it helps you estimate how likely your customers are to stay with you, or even actively promote your product.

How to collect customer loyalty feedback?

Net Promoter Score (NPS) surveys are a common way to collect customer loyalty feedback.

This kind of user feedback survey asks customers how likely they are, on a scale from 0 to 10, to recommend the product to their friends or colleagues. Based on the answers, it classifies users into promoters, passives, and detractors.

It’s good practice to follow up on this with a qualitative question to ask for more detailed insights. Ideally, the tool you use should let you tag and analyze the qualitative NPS answers so that you can segment your users effectively and spot trends in their opinions.

Create NPS surveys with Userpilot to collect customer loyalty feedback.

Customer satisfaction surveys

Customer satisfaction feedback tells you how happy your users are with the product experience and the value it offers.

Such surveys can highlight areas that need improving in your product and opportunities to add value and delight.

How to collect customer satisfaction feedback?

To measure customer satisfaction, we use surveys that are very similar to NPS ones. You ask your users how satisfied they are with the experience and give them a Likert scale to rate it.

Ideally, you should trigger this contextually just after the experience so that it’s still fresh in their minds. This will give you more accurate results and will increase the response rate.

Userpilot customer satisfaction feedback survey.

Customer experience feedback

Customer experience feedback is a kind of satisfaction feedback. It focuses specifically on user interactions with your product at different touchpoints in the customer journey. It helps you evaluate how easy it is for the customer to complete their tasks with the product.

How to collect customer experience feedback?

Product managers often use Customer Effort Score (CES) surveys to collect customer experience feedback. The survey basically asks users to rate how easy it was for them to accomplish their objective.

You can run these surveys when the users reach a milestone in the customer journey or use a feature for the first time.

CES survey.

Product viability feedback using PMF survey

Product viability feedback allows you to assess how successful your product is at satisfying a genuine market need.

For products to be viable, there need to be enough customers ready to pay for a product that solves their problem. That’s how you achieve product-market fit.

Finding the product-market fit.

How to collect product viability feedback?

To collect viability feedback, teams use PMF surveys, also known as Sean Ellis tests.

The test is a bit subversive. Instead of asking how happy customers are with the feature or product, it asks them how disappointed they’d be if they couldn’t use it again. If at least 40% of your users respond ‘very disappointed’, you’ve reached product-market fit. If not, you need to keep iterating on your MVP.

PMF survey to capture viability feedback.

Bug reports

Bug reports tell you about the technical issues that spoil the user experience.

Product managers can’t afford to ignore these because they not only result in customer churn but also cause irreparable damage to the product and brand reputation.

How to collect feedback on bugs?

To collect bug reports, use a feedback widget like the one you use to collect customer requests or passive feedback.

Bug report widget created in Userpilot.

Customer service feedback

Customer service feedback helps teams evaluate how effective the customer support teams are at dealing with customers. It also helps you identify issues with the product and user experience that you need to rectify.

This kind of feedback could include complaints but it’s not limited to negative feedback. Positive feedback helps you identify and promote successful practices.

How to collect customer service feedback?

Customer service feedback normally comes from support teams. These are the people who deal with customers on a daily basis so they are aware of the barriers and problems that they face.

Monitoring and reviewing support tickets also offers valuable insights into the quality of customer service. Using a complaint tracking tool like Zendesk or Freshdesk will help you manage customer complaints efficiently.

Understand customer expectations with sales feedback

Your sales team is another customer-facing team that can provide you with valuable knowledge about customer expectations.

That’s because salespeople tend to be well-trained in questioning techniques and they are great at extracting from customers what they need. They may be able to tell you about the unsolved pain points that customers face or gaps in the market that you could fill.

How to collect sales feedback?

The most obvious way to collect this kind of feedback is from sales calls.

Indirectly, you could also obtain this kind of feedback by monitoring subscription cancellations. An increase in cancellations could indicate that the product stopped delivering the expected value or that there is a more attractive competitor around.

Subscription cancellation feedback

The feedback your customers leave as they’re about to cancel their subscription or downgrade their plan can shed light on how your product fails to satisfy their needs.

It can give you ideas on how you can improve the product and the user experience to reduce churn and give insights into the shifts in the competitive market.

How to collect subscription cancellation feedback?

Churn surveys are a kind of microsurvey that target users who have canceled their paid plan or are about to do this. They can be delivered by email or in-app.

The advantage of the latter is that it also gives you a chance to win the customer back. You can do this by highlighting what the users are going to miss out on when they proceed with the cancellation.

In your survey, include an option for users to add their own reason for leaving, not just a constrained list. In this way, you increase the chance of unearthing new issues.

Customer churn feedback survey.

Close the customer feedback loop after collecting feedback

Collecting feedback only makes sense when you are ready to act on it.

What do we mean by acting on it?

  • Use in-app messages to acknowledge the feedback so that users know you’ve received it and appreciate it.
  • Tag and group similar responses together to quantify them and spot trends.
  • Reach out to your users to gain more detailed insights into their experience.
  • Cross-reference user feedback results with product usage and customer behavior data to identify successful and unsuccessful practices. For example, look at how your power users or most loyal customers engage with the product and where they get value from.
  • Check the requests for alignment with the product vision. Act only on the feedback that will help you realize your product goals. Otherwise, you will spread your resources thin and damage the value proposition.
  • Once you implement changes, use in-app communications to bring your users up-to-date. Don’t forget about the inactive users. Try to reengage them with emails.
  • Start collecting feedback as soon as you implement any changes.
Customer feedback loop.

Conclusion

There are quite a few types of customer feedback that product managers can leverage to make informed product decisions.

It’s good practice to collect customer feedback from various sources and use it in conjunction with product usage data to better understand customer behavior.

No matter how much feedback you collect, it’s not much use if you don’t close the feedback loop and act on it.

If you want to see how Userpilot can help you collect and analyze various types of customer feedback, book the demo!

The post 13 Types of Customer Feedback: How to Collect Valuable Feedback As a Product Manager appeared first on Thoughts about Product Adoption, User Onboarding and Good UX | Userpilot Blog.

The post 13 Types of Customer Feedback: How to Collect Valuable Feedback As a Product Manager appeared first on ProdSens.live.

How did I earn money to fix an issue from an Open Source Software (OSS)? https://prodsens.live/2023/02/06/how-did-i-earn-money-to-fix-an-issue-from-an-open-source-software-oss/?utm_source=rss&utm_medium=rss&utm_campaign=how-did-i-earn-money-to-fix-an-issue-from-an-open-source-software-oss https://prodsens.live/2023/02/06/how-did-i-earn-money-to-fix-an-issue-from-an-open-source-software-oss/#respond Mon, 06 Feb 2023 12:05:20 +0000 https://prodsens.live/2023/02/06/how-did-i-earn-money-to-fix-an-issue-from-an-open-source-software-oss/ how-did-i-earn-money-to-fix-an-issue-from-an-open-source-software-(oss)?


Originally published at renanfranca.github.io

I am going to share my approach and my own experience and opinion about fixing a bounty issue from the jhipster-lite Open Source Software (OSS).

Which issue to choose

I was stuck for a long time because I couldn’t decide which issue to fix, and I didn’t have much time to spend on it. I could only code while my baby girl was sleeping.

At first, I created an issue to fix myself because I was afraid of failing to fix an existing one. Then I realized that it wasn’t a good idea to implement a front-end feature, because my strength is back-end code using Spring Boot!

After some time, I decided to combine my goal of helping the jhipster-lite (jhlite) project by solving an issue with my need to earn money beyond my paycheck. So I started to read the bounty issues list, looking for an issue that demanded only backend skills to fix.

In the end, I decided to try to fix this issue: Add the possibility to comment spring properties in modules.

How should I start to fix an issue?

I started by reading the closed issues to understand the process and to look for answers to my questions:

  • How is an issue assigned to someone?
  • Should I ask for permission before I start trying to fix an issue?
  • What happens if two contributors try to fix the same issue and neither asks for permission?

I found all the answers in the bounty issue instructions:

  • Create a Pull Request that fixes a ticket with the $$ bug-bounty $$ label.
  • In order to close the ticket automatically, you must have one commit message with the Fix keyword. For example, Fix #1234 to close ticket #1234.
  • That Pull Request must be merged by someone from the core team. If there are several Pull Requests, the core team member either selects the most recent one or the best one – that’s up to the team member to decide what is best for the project.

In addition, I realized that I was afraid of having my fix declined by the reviewer. So I decided to push past that fear and create a pull request as fast as possible to gather feedback (positive or negative) from the reviewer.

What did help me to fix the issue?

Here I will share some of the skills and background knowledge that gave me the confidence to keep coding until the issue got merged 😄!

Learn about the jhipster-lite project

In my case, I need to understand a project to be motivated to contribute to it. Below, I list some interesting resources that made me fall in love with jhlite 😊.

Study Hexagonal Architecture

  • Hexagonal architecture (application service flavor)
  • I cloned the jhipster-lite project and ran it in debug mode to learn more about the code base I was going to change.
  • Then I realized that a better approach is to run the existing tests in debug mode. The tests work as documentation of the functionality.

Affinity and joy in implementing Tests

I believe that implementing tests is the way to build great software. The jhipster-lite project is configured to accept only code with 100% test coverage, so any change made to this project needs to be tested!

The test code should be as good as the production code, and that’s not a trivial task. In my opinion, the jhlite project has a high-quality code base, and I had to do my best to implement the feature with the best code I could write.

Know in advance some technologies

  • Maven
  • Java
    • Jacoco
    • Cucumber
    • JUnit
  • Spring Boot
  • VS Code
  • Prettier
  • Sonar
  • Git (especially rebase commands)
  • GitHub (Actions (CI), Fork, Issues, and Create Pull Request)

My workflow

I will share the workflow I used to fix, improve, and refactor the code each time my pull request was reviewed.

Before pushing my changes to be reviewed

Here is the script I used to keep my branch up to date with the latest version of the jhipster-lite main branch.

git switch main
git fetch upstream
git rebase upstream/main
git push
git switch <your-feature-branch>
git rebase origin/main
npm run prettier:format

Then I ran Sonar after I finished coding:

docker compose -f src/main/docker/sonar.yml up -d
./mvnw clean verify sonar:sonar

Then check the result at http://localhost:9001 to see whether you reach 100% code coverage or need to fix any code smells.

Keep the pull request reviewer informed

Every time I pushed some changes, I let the reviewer know, and I always answered each code review comment he made.

I preferred to push the changes even when I didn’t fully understand what the reviewer wanted, because in my experience it is easier to talk about implemented code than about a hypothetical one.

There is a Contributor Covenant Code of Conduct

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

In my own experience as a contributor, I can say that everyone I interacted with was kind and genuinely wanted the best for the project. I felt that they appreciated my effort to fix an issue and they were cheering me on!

Claim the bounty

After my pull request got merged, I was eligible to claim the bounty.

First, I created an invoice on Open Collective: $300 bug bounty claim for https://github.com/jhipster/jhipster-lite/pull/4814.

Then I posted the invoice link on my pull request: https://github.com/jhipster/jhipster-lite/pull/4814#issuecomment-1374433366.

After some time, the invoice was processed and I got paid 👏😃☺!

The official documentation explains each step to claim your bounty: How to get the money.

What was the outcome of this experience?

I am a Windows user, but I am migrating to Linux (Ubuntu) through WSL 2 because it is faster. On Windows, it took me 5 minutes and 20 seconds to run the command ./mvnw clean verify sonar:sonar. On Linux, the same command took 2 minutes and 30 seconds. I found a great YouTube video, How To Run Linux Code on Windows with WSL 2 & VS Code, which taught me how to use Linux for coding inside Windows. In addition, my developer experience was lighter and smoother with VS Code running under Linux.

I enjoyed working on the jhlite project asynchronously and without pressure! I could code whenever I had the time, at my own pace 👏!

I realized that I am not an expert in TDD; I have so much to learn. Here is an interesting article that opened my eyes: The Real Reasons for Doing Test-Driven Development. Thanks to Colin Damon for letting me know that I was doing Test First rather than TDD, and for introducing me to the concept of Software Craftsmanship.

Thanks to Pascal Grimaud for being around and following along on the issue.

What next

I am planning to continue contributing to the jhlite project by fixing issues, and to keep writing blog posts about my experience with JHipster.

One day, I wish to be part of the jhipster core team 😊!

The post How did I earn money to fix an issue from an Open Source Software (OSS)? appeared first on ProdSens.live.
