Best URL Practices for SEO: How to Optimize URLs for Search


Back when I started playing the search engine optimization (SEO) game, keyword stuffing actually worked. Since then, algorithm updates have spared only a few SEO best practices, like URL optimization.

Google’s algorithm updates have shaken up the landscape of SEO. URL optimization, however, has stood the test of time. It’s an essential element of on-page SEO that every content marketer should know.

To get you up to speed, I’ll share the ABCs of URL optimization and up-to-date best practices. I’ll also share tips I’ve learned from experts.


The making of an optimized URL.

A typical URL consists of several parts: Protocol, subdomain, domain, subdirectory, and slug. A protocol can be HTTP or HTTPS — the latter signaling an encrypted connection. A subdomain is usually “www.” (World Wide Web), but custom subdomains like “shop.” and “blog.” aren’t uncommon.

Afterward, there’s the domain name, which consists of a top-level domain like “.com” and a second-level domain, which is usually a brand or project name.

The aforementioned parts will help you reach a home page. From there, you’ll likely go to a subdirectory — a folder inside the main website — and a slug, which identifies singular pages.

URL Example

Consider the following URL: https://blog.hubspot.com/marketing/url-best-practices-for-seo. It has the following parts:

  • Protocol: https://
  • Subdomain: blog.
  • Second-level domain (SLD): hubspot
  • Top-level domain (TLD): .com
  • Subdirectory: /marketing
  • Slug: /url-best-practices-for-seo

From this URL, I can tell that I’m on a blog about URL basics. It’s published by HubSpot and hosted using an encrypted connection.
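If you want to inspect these parts programmatically, the built-in URL API does most of the work. Here's a quick sketch; note that the subdomain/TLD split is naive string handling for illustration only:

```ts
// Parse the example URL with the standard URL API (works in Node.js and browsers).
const url = new URL("https://blog.hubspot.com/marketing/url-best-practices-for-seo");

console.log(url.protocol); // "https:"
console.log(url.hostname); // "blog.hubspot.com"
console.log(url.pathname); // "/marketing/url-best-practices-for-seo"

// Naive split for illustration; real TLD parsing needs the Public Suffix List.
const [subdomain, sld, tld] = url.hostname.split(".");
console.log({ subdomain, sld, tld }); // { subdomain: "blog", sld: "hubspot", tld: "com" }
```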

Readable URLs like this are only possible if we use SEO best practices. Let’s dig further into the reasons why good URLs are so impactful.

Ranking Factor

URLs are among Google’s confirmed search ranking factors; they help search engines decipher what each webpage contains. With that in mind, I add relevant keywords to my URLs to help Google understand each page’s content, its purpose, and which searches it should rank for.

User Experience

Keyword stuffing is a thing of the past. SEO is now a delicate dance of pleasing both algorithms and flesh-and-blood readers. I now aim for a descriptive URL so users know what to expect from the page.

If someone sends my URL in a direct message, will the recipient feel confident they’re clicking into a relevant and valuable page? Reaching this ideal gets me more backlinks and sales.

If there’s one thing I hate about LinkedIn, it’s how it handles external URLs. You can’t hyperlink anchor text. Instead, you must paste a bare link, which looks ugly when the URL is non-descriptive, like https://www.example.com/post/p123/.

How external links look on LinkedIn.


While I could use a link shortener to make it prettier, that’s an extra step. I avoid the issue altogether by using descriptive, well-formatted, and concise URLs.

We’ve talked about the ideals, so let’s go through the SEO best practices that will get you there.

Crafting the perfect URL is only one part of ranking in search. Looking to learn more? Check out our complete guide to on-page SEO.

1. Keep each URL as simple as possible.

SEO is a Rubik’s Cube on steroids — complex and constantly shifting. Sometimes, my saving grace is Google’s guidelines on URL optimization. We’re told to “create a simple URL structure” and use “simple, descriptive words in the URL.”

While “simple” varies from person to person, opt for one of the following good URL structures, depending on the business.

  • Content website: https://example.com/category/post-title
  • Ecommerce website: https://example.com/product-category/product-name
  • Service-based website: https://example.com/service-category/service-name
  • Local business website: https://example.com/location/service
  • Portfolio website: https://example.com/portfolio/project-name

2. Standardize your URL naming conventions.

While I recommend using one of the variations shared above, I occasionally brainstorm among my team members to see what works for us. As long as we have a standard and stick to it, we’re good.

“URLs are a stable foundation. Once set, changing them can cause more harm than good, leading to broken links and lost SEO juice — unless properly managed with redirects,” shares Ryan Ratkowski, founder of Cascade Interactive.

I think of it like a building’s plumbing system. I’d focus on getting the configuration right during setup rather than ripping out the walls five years in. Incorporate SEO best practices for URLs during the initial build of your website.

3. Limit the URL structure to three hierarchical levels.

The first time I set up a URL structure, I debated diving deep into subfolders and subcategories for everything and anything. My more experienced stance is to keep it simple and keep it logical.

Jacob Kettner, CEO of First Rank, recommends “a maximum of three hierarchical levels, ensuring clarity without unnecessary complexity.”

Why? “It strikes the perfect balance, offering categorization without overwhelming users,” he adds.

4. Avoid adding dates.

I think twice before slamming timestamps onto my URLs. It’s like adding an expiration date to my webpage. Meanwhile, users (and Google) prefer fresh content. Keep your URLs timeless, just like a classic black tee.

Maddy Osman, founder of The Blogsmith, agrees and adds: “In most cases, articles take anywhere from three to six months to appear in the top 10 on SERPs. You don’t want to restrict the potential of that ranking article by including the previous year in the URL slug.”

I use WordPress, so I head to “Settings” > “Permalinks” to make sure I haven’t enabled a permalink structure involving time information.
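If you manage WordPress from the command line, the same check is one WP-CLI call away; this assumes WP-CLI is installed, and uses permalink_structure, the underlying option name for that settings screen:

```bash
# Show the current permalink structure (date-based structures contain %year%/%monthnum%)
wp option get permalink_structure

# Switch to a timeless, post-name-only structure
wp option update permalink_structure '/%postname%/'
```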


5. Take out non-essential words in the slug.

Pop quiz. Which should you use?

  • /how-to-optimize-your-urls-for-search-quick-tip
  • /how-to-optimize-urls-for-search

Writers and editors often ask me about this. Personally, I remove words that add little or no meaning to the URL — like “a,” “and,” and “that.” The latter URL without “your” and “quick-tip” conveys the same meaning without looking like a word soup, so I prefer that variation.

Plus, a 2023 Backlinko study found that shorter URLs tend to rank above longer URLs, so I use a limit of 60-70 characters to avoid long URLs.

To get an even shorter URL, I could also remove the words “to” and “for,” but I keep them since they make the URL more readable for humans. It’s a balance that becomes intuitive with practice.

6. Handle dynamically generated URLs with care.

While a static URL remains consistent every time it’s accessed, I’ve run into website builders that automatically generate dynamic URL parameters when the webpage is loaded.

In such cases, I don’t have the complete customizability to change the URL, so I have to make do with URLs containing random symbols and numbers. Working with that can be a challenge, but I don’t lose sleep over it.

“As long as you‘re aware of your website’s limitations and can optimize the URL slugs you do have control over, you shouldn’t have to worry about parameterized URLs negatively impacting your SEO performance,” Lauren Galvez, an experienced SEO consultant, assured me.

7. Include relevant keywords.

Since the URL tells search crawlers what the webpage is about, I recommend including relevant keywords to instantly convey what your page covers.

This also improves my click-through rate (CTR) on the page. A 2023 Backlinko study found that webpages with URLs similar to search keywords enjoyed a higher CTR compared to webpages with URLs different from search keywords.

For instance, if users search for “ergonomic keyboards,” I opt for an SEO-friendly URL slug that contains “ergonomic keyboards” instead of “flexible keyboards.”

8. No keyword stuffing.


When an article is relevant to multiple main keywords, I don’t include all of them in the URL. Otherwise, I’m left with a mess like this:

https://blog.hubspot.com/marketing/url-best-practices-for-seo-friendly-structure-optimization.html

Wow, that looks ugly. Plus, it would take readers a few tedious seconds to understand what the webpage is about. In contrast, SEO best practices for URLs prioritize usability over almost everything else.

What I do is pick a single keyword for my URL and let my content do the talking.

9. Make it reader-friendly.

While I’ve mentioned it before, it’s worth reiterating that URLs should be self-explanatory to internet users. People should be able to instantly tell what they might find based on your slug.

With that in mind, sometimes I have to reorder my keywords. Other times, I have to omit words or add stop words. For instance, the URL slug “/google-algorithm-update-names” may be a mouthful for readers, so I’d change it to “/names-of-google-algorithm-updates.”

10. Separate words with hyphens.

URLs cannot contain spaces. So, to ensure I don’t end up with slugs like “/googlealgorithmupdatenames,” I use a separator. Google recommends that we use hyphens (-) instead of underscores (_).

11. Use lowercase letters.

While I’m all about proper capitalization (even in text messages), I have to accept the triumph of lowercase letters in URLs. For starters, it keeps things consistent. Plus, it avoids compatibility hiccups with any case-sensitive web server since a user might enter a URL with lowercase instead of uppercase letters.
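To see tips 5, 10, and 11 working together in code, here’s a minimal slugify sketch; the stop-word list is illustrative, so tune it to taste:

```ts
// A minimal slugify sketch: lowercase, hyphen-separated, low-value stop words removed.
const STOP_WORDS = new Set(["a", "an", "and", "the", "that", "your"]);

function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "") // strip punctuation
    .split(/\s+/)
    .filter((word) => word.length > 0 && !STOP_WORDS.has(word))
    .join("-");
}

// slugify("How to Optimize Your URLs for Search!")
// => "how-to-optimize-urls-for-search"
```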

12. Don’t use slugs that belong to other pages.

Every URL needs a unique slug; otherwise, Google may treat the pages as duplicate content on your website, which can hurt rankings in some circumstances. To prevent my web pages from competing with each other in search engine results, I avoid similar URL slugs entirely.

That being said, especially if you own an ecommerce store, you might realize you have two similar URLs like this:

  • https://www.example.com/product-category-one
  • https://www.example.com/product-category-directory/product-category-one

When that happens to me, I tell search crawlers which webpage I want to appear on Google Search. More specifically, I use canonical tags, a classic technical SEO practice.
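In a React app, for example, a canonical tag is just a link element rendered into the document head. Here’s a minimal sketch using Next.js’s next/head; the URL is hypothetical:

```tsx
import Head from "next/head";

// Rendered on the duplicate page, pointing crawlers at the preferred URL.
export function CanonicalTag() {
  return (
    <Head>
      <link
        rel="canonical"
        href="https://www.example.com/product-category-one"
      />
    </Head>
  );
}
```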


Making the Most of URLs

When I’m scrolling on my phone, I’m not analyzing the URLs I click on. I just tap away. However, on the back end, great URLs lead to more traffic. While there’s a laundry list of tips to keep in mind, these best practices become second nature over time.

When you want to dig deeper into SEO essentials, such as link building, check out our SEO guide with all the juicy details.


Editor’s note: This post was originally published in April 2014 and has been updated for comprehensiveness.

OpenPose ControlNet: A Beginner’s Guide


What is OpenPose ControlNet and how does it work?

OpenPose ControlNet may seem intimidating to beginners, but it’s an incredibly powerful AI tool: it lets you control the pose of human figures in generated images by copying body positions from a reference image. In this blog post, we will take a closer look at OpenPose ControlNet, from its core concepts to its practical applications. We will also guide you through the installation process and dig into ControlNet’s settings. In addition, we will explore how to choose the right model for your needs, examine the role of Tile Resample, and learn how to copy a face with ControlNet using the IP-Adapter Plus Face model. Lastly, we will discuss ideas for using ControlNet in various fields and how pairing the Stable Diffusion depth model with ControlNet improves results. By the end of this beginner’s guide, you’ll be able to use OpenPose ControlNet like a pro!

Understanding ControlNet in OpenPose

ControlNet adds conditional control to Stable Diffusion: a preprocessor extracts a control map from a reference image (with OpenPose, a skeleton of detected body keypoints), and that map guides generation so the output matches the reference pose. The OpenPose editor lets you adjust the detected keypoints before generating.

The Core Concept of ControlNet

ControlNet is a neural network structure that conditions a pretrained diffusion model on an extra input, the control map, without retraining the base model. With the OpenPose variants, that map can encode body, hand, and facial keypoints, giving precise control over head and eye positions in the generated image.

Practical Applications of ControlNet in OpenPose

Practical applications include posing characters for illustration and animation, replicating a reference photo’s pose with a new subject or style, and keeping poses consistent across a series of generated images.

Getting Started with Stable Diffusion ControlNet

The first step is installing the ControlNet extension, whether you run AUTOMATIC1111 in Google Colab or locally on a Windows PC or Mac, and keeping it updated. To install the v1.1 ControlNet extension, go to the Extensions tab and install it from this URL: https://github.com/Mikubill/sd-webui-controlnet. If you already have v1 ControlNet installed, delete its folder from stable-diffusion-webui/extensions/ first, then install v1.1 as above.

Steps to Install ControlNet in Google Colab

In Google Colab, use a notebook that launches AUTOMATIC1111 with the ControlNet extension enabled, and download the ControlNet OpenPose model alongside your Stable Diffusion base model.

Click the Play button to start AUTOMATIC1111.

Procedure for Installing ControlNet on Windows PC or Mac

On a Windows PC or Mac, install the extension from within the AUTOMATIC1111 web UI:

  1. Navigate to the Extensions page.

  2. Select the Install from URL tab.

  3. Put the following URL in the URL for extension’s repository field: https://github.com/Mikubill/sd-webui-controlnet

  4. Click the Install button.

  5. Restart AUTOMATIC1111.

How to Update the ControlNet Extension

Keep the extension current to pick up new preprocessors, models, and bug fixes:

  1. Navigate to the Extensions page.
  2. In the Installed tab, click Check for updates.
  3. Wait for the confirmation message.
  4. Restart AUTOMATIC1111 Web-UI.

Diving into ControlNet Settings

The ControlNet panel sits inside the regular txt2img tab, so your text prompt and negative prompt work as usual; the extension adds controls for uploading a reference image, picking a preprocessor and model, and setting how strongly the control map constrains generation.

An Overview of Text-to-Image Settings

Your text prompt describes the subject and style, while the control map extracted from the reference image pins down the pose; together they determine the generated result.

Exploring ControlNet Settings in Depth

Beyond the basics, the settings let you tune the control weight (how strongly the map constrains generation), the starting and ending control steps, and the resize mode that matches the control map to your output aspect ratio.
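If you drive AUTOMATIC1111 from a script rather than the UI, the same settings travel in the request body. Here’s a hedged sketch; the alwayson_scripts payload shape follows the sd-webui-controlnet API convention, so verify exact field names against your installed version:

```ts
// Hypothetical call to a local AUTOMATIC1111 instance started with the --api flag.
async function generateWithPose(referenceImageBase64: string) {
  const response = await fetch("http://127.0.0.1:7860/sdapi/v1/txt2img", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt: "a dancer mid-leap, studio lighting",
      negative_prompt: "blurry, extra limbs",
      steps: 20,
      alwayson_scripts: {
        controlnet: {
          args: [
            {
              input_image: referenceImageBase64, // reference pose photo
              module: "openpose", // preprocessor
              model: "control_v11p_sd15_openpose", // assumed model filename
              weight: 1.0, // control weight
            },
          ],
        },
      },
    }),
  });
  const { images } = await response.json(); // base64-encoded results
  return images as string[];
}
```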

A Look at Preprocessors and Models in OpenPose

A preprocessor turns your reference image into a control map (for example, openpose extracts body keypoints, while openpose_face adds facial keypoints), and the model consumes that map during generation. The preprocessor and model must match: an OpenPose control map needs the OpenPose ControlNet model.

How to Choose the Right Model for Your Needs

When selecting a model, start from your use case: a body-only pose needs just the standard OpenPose model, while faces and hands call for the fuller variants. Also make sure the ControlNet model matches the version of your Stable Diffusion base model, since the base model significantly shapes the final image.

Delving into OpenPose and Its Features

To begin using ControlNet, the first step is to select a preprocessor. Enabling the preview feature can be beneficial as it allows you to observe the transformations applied by the preprocessor. Once the preprocessing is complete, the original image is no longer utilized, and only the preprocessed image is retained for further use with ControlNet.

Understanding the Role of Tile Resample

Tile Resample downsamples the input image so the tile ControlNet model can regenerate fine local detail while preserving the overall composition; it is typically paired with an upscaler to sharpen large images.

The Art of Copying a Face with ControlNet

Replicating a face with ControlNet demands precise facial keypoints: the face-aware OpenPose preprocessors capture eye, head, and expression positions so the generated face matches the reference.


Installation Guide for the IP-Adapter Plus Face Model

To install the IP-Adapter Plus Face model, download its checkpoint into the ControlNet models folder and select the matching IP-Adapter preprocessor in the web UI. It works for photorealistic and anime subjects alike.

Utilizing the IP-Adapter Plus Face Model Effectively

The IP-Adapter Plus Face model uses a reference photo of a face as a conditioning input, so you can carry a likeness into new generations; the usual ControlNet settings, such as control weight, govern how strongly the reference face shapes the final image.

Unveiling the Magic of Multiple ControlNets

You can enable several ControlNet units at once, each with its own preprocessor and model — for example, combining an OpenPose unit for the body pose with a depth unit for the scene layout — which gives you much finer control over the final composition.

Innovative Ideas for Using ControlNet in Various Fields

Beyond portraits, ControlNet OpenPose is useful wherever a specific human pose matters: fashion mockups, storyboarding, game character concepts, and reference sheets for animation.

How Does the Interaction between Stable Diffusion Depth Model and ControlNet Enhance Performance?

Pairing a depth unit with an OpenPose unit improves results: the depth map captures the scene’s spatial layout while the pose map fixes the figure’s keypoints, so the generated image respects both the environment and the pose.

Conclusion

In conclusion, ControlNet in OpenPose is a powerful tool that allows for precise control and manipulation of various parameters in image generation. Whether you’re a beginner or an experienced user, understanding ControlNet and its applications can greatly enhance your experience with OpenPose. By following the installation and setup instructions, exploring the different settings, and utilizing the available models, you can unleash your creativity and achieve incredible results. From copying faces to exploring innovative ideas in different fields, the possibilities are endless. The interaction between the Stable Diffusion Depth Model and ControlNet further enhances performance and opens up new avenues for experimentation. So, dive into the world of ControlNet and see what amazing creations you can achieve. Happy exploring!

Originally published at novita.ai
novita.ai provides Stable Diffusion API and hundreds of fast and cheapest AI image generation APIs for 10,000 models.🎯 Fastest generation in just 2s, Pay-As-You-Go, a minimum of $0.0015 for each standard image, you can add your own models and avoid GPU maintenance. Free to share open-source extensions.

Make an Animated Menu like Stripe with React, Tailwind, and AI


Written by Steve Sewell.

How does Stripe make this awesome morphing menu animation?

Let’s recreate this in React with just a few lines of logic:

```ts
const [hovering, setHovering] = useState<number | null>(null);
const [popoverLeft, setPopoverLeft] = useState<number | null>(null);
const [popoverHeight, setPopoverHeight] = useState<number | null>(null);

const refs = useRef<(HTMLElement | null)[]>([]);

const onMouseEnter = (index: number, el: HTMLElement) => {
  setHovering(index);
  setPopoverLeft(el.offsetLeft);
  const menuElement = refs.current[index];
  if (menuElement) {
    setPopoverHeight(menuElement.offsetHeight);
  }
};
```

We’ll also use AI and Tailwind to create the markup, to quickly go from a basic hello world app to this as our end result:

Generate the markup

Let’s start with a blank React app. You can use Next.js, Remix, or even be a cool kid and use Qwik if you like. Here is where I started:

```tsx
export default function Home() {
  return <h1>Hello world</h1>;
}
```

Screenshot of a "hello world" in a browser window

Stunning!

But this is pretty far from looking like what we want, and hand coding an entire Stripe site will take a lot of time.

But thankfully, we have AI to get us 80% of the way there without all that work.

I started with these mockups in Figma, and used the Builder.io plugin to convert them to React + Tailwind code using Visual Copilot.

By just clicking Generate , we get launched into Builder.io, and we can copy the code and paste it into our codebase.

I put it into a new component that I named StripeHero:

```tsx
export function StripeHero() {
  return /* markup generated by Builder.io */
}
```

I then import that into my page:

```tsx
import { StripeHero } from "@/components/StripeHero";

export default function Home() {
  return <StripeHero />;
}
```

And we get this:

Screenshot of our new page that looks almost identical to Stripe’s homepage

Much better!

Now we want to extract the markup for the navigation links into their own component so we can add our logic to them.

From our StripeHero component, I cut this section out and brought it to a new Nav component:

```tsx
export function Nav() {
  return (
    <nav className="items-start self-center flex w-[486px] max-w-full justify-between gap-5 my-auto max-md:flex-wrap max-md:justify-center">
      <a
        href="/products"
        className="text-white text-center text-base font-medium leading-6 tracking-wide self-stretch"
      >
        Products
      </a>
      <a
        href="/solutions"
        className="text-white text-center text-base font-medium leading-6 tracking-wide self-stretch"
      >
        Solutions
      </a>
      {/* ... the other links */}
    </nav>
  );
}
```

And then back in our StripeHero, we will reference this component instead:

```tsx
import { Nav } from './Nav'

export function StripeHero() {
  return (
    <>
      ...
      <Nav />
      ...
    </>
  )
}
```

Now that we didn’t have to waste time generating all of that markup and styling, let’s plug in our logic and make those nice interactions.

Adding the logic

Going back at our animation we want to copy again:

There are only three things we need to track:

  1. We need to know which link we’re hovering over. We’ll store that as a number.
  2. We need to know the left offset that the menu should have, so that it positions itself under the relevant link.
  3. We need to know the height of the current nav section to show, so we can resize the popover height to match it.

So back in our nav component, let’s add these.

```tsx
export function Nav() {
  const [hovering, setHovering] = useState<number | null>(null);
  const [popoverLeft, setPopoverLeft] = useState<number | null>(null);
  const [popoverHeight, setPopoverHeight] = useState<number | null>(null);

  return /* the markup generated by Builder.io */
}
```

Note: There are better ways for managing state than repeated useState hooks like this, such as using reducers, libraries, or custom hooks, but to keep things simple for learning purposes, we’ll stick to useState today.
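For the curious, here’s a minimal sketch of the reducer alternative the note mentions; it’s hypothetical and not part of the tutorial:

```ts
import { useReducer } from "react";

type PopoverState = {
  hovering: number | null;
  popoverLeft: number | null;
  popoverHeight: number | null;
};

// One action patches any subset of the popover state at once.
function popoverReducer(
  state: PopoverState,
  patch: Partial<PopoverState>
): PopoverState {
  return { ...state, ...patch };
}

const initialPopoverState: PopoverState = {
  hovering: null,
  popoverLeft: null,
  popoverHeight: null,
};

// Inside the component:
// const [popover, update] = useReducer(popoverReducer, initialPopoverState);
// update({ hovering: 0, popoverLeft: el.offsetLeft });
```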

Now, let’s start plugging these in.

On mouse enter of each link, we will set the hovering to the correct index, and when we leave the nav entirely, we will set hovering back to null.

Additionally, when we are hovering over a link, we’ll have a popover show.

```tsx
export function Nav() {
  // ...
  return (
    <nav onMouseLeave={() => setHovering(null)}>
      <a onMouseEnter={() => setHovering(0)} ...>
        Products
      </a>
      <a onMouseEnter={() => setHovering(1)} ...>
        Solutions
      </a>
      ...
      {typeof hovering === 'number' && (
        <div className="absolute shadow bg-white p-5 rounded w-[600px] ...">
          {/* Our popover */}
        </div>
      )}
    </nav>
  )
}
```

Now we’ve got a basic popover when we hover links.

Now, let’s make it so that when I hover different links, the popover follows us to which link our mouse is over.

To do that, we’ll need to set the popoverLeft value. We can add that to our onMouseEnter callbacks by setting popoverLeft to event.currentTarget.offsetLeft. We can do it for each link like so:

```tsx
export function Nav() {
  // ...
  const [popoverLeft, setPopoverLeft] = useState<number | null>(null);

  return (
    // ...
      <a
        onMouseEnter={(event) => {
          setHovering(0);
          setPopoverLeft(event.currentTarget.offsetLeft);
        }}
        ...
        >
        Products
      </a>
      <a
        onMouseEnter={(event) => {
          setHovering(1);
          setPopoverLeft(event.currentTarget.offsetLeft);
        }}
        ...
        >
        Solutions
      </a>
    // ...
  )
}
```

Stay with me — we’ll refactor this logic later to be less redundant.

Then, we just need to set the left value to the popover itself. We’ll also add a transition-all class so that our popover animates when it moves.

```tsx
export function Nav() {
  // ...
      {typeof hovering === 'number' && (
        <div
          style={{
            left: popoverLeft ?? 0
          }}
          className="transition-all ...">
          {/* ... */}
        </div>
      )}
    // ...
}
```

Now it gives us a popover that follows you as expected:

Now, the next thing we need is something to live inside of our popover.

I’m going to go back to our Figma design and use the Builder.io figma plugin to convert each of the popover menus to React + Tailwind code.

Now in VS Code I made components for each Menu and pasted the code in there.

One thing we’ll need to do shortly is to measure the client height of each of these menus. In order to do that, we’ll need access to the underlying DOM node.

In order to provide access to the inner DOM node to a parent component, we’ll wrap each of these components in react’s forwardRef to make this easy.

```tsx
import { forwardRef } from 'react';

export const Menu3 = forwardRef<HTMLElement>((props, ref) => {
  return (
    <section ref={ref} ...>
      {/* Markup generated by Builder.io */}
    </section>
  )
})
```

Now, back in our Nav component we can plug the menus into our popover based on which is being hovered like so:

```tsx
export function Nav() {
  // ...
      {typeof hovering === 'number' && (
        <div ...>
          {hovering === 0 ? (
            <Menu0 />
          ) : hovering === 1 ? (
            <Menu1 />
          ) : hovering === 2 ? (
            <Menu2 />
          ) : hovering === 3 ? (
            <Menu3 />
          ) : null}
        </div>
      )}
    // ...
}
```

Now, this isn’t the prettiest way to show a different view by index, but we will refactor this later.

Also, speaking of keeping your code clean, there is one other ugly thing we do here that we should generally do better.

In reality, you’d generally want to give each of those menus a more descriptive name, like ProductsMenu and SolutionsMenu, but I got lazy and forgot to rename this before building out this example so bear with me and name your own components better. 😄

Now that things are functional, we just need to add the nice animations. The trick here is we actually need to render all of the menus at once, so we can fade each in and out individually without them popping in and out of the DOM. So now our Nav component will look like this instead:

```tsx
export function Nav() {
  // ...
      {typeof hovering === 'number' && (
        <div ...>
          <Menu0 />
          <Menu1 />
          <Menu2 />
          <Menu3 />
        </div>
      )}
    // ...
}
```

But of course, we need to overlay each menu so we can fade one into the next. To do so, we’ll make each absolute:

```tsx
export function Nav() {
  // ...
      {typeof hovering === 'number' && (
        <div ...>
          <div className="absolute">
            <Menu0 />
          </div>
          <div className="absolute">
            <Menu1 />
          </div>
          ...
        </div>
      )}
    // ...
}
```

Now that they all overlap, we just need to animate the active one.

I’m going to use a handy library called clsx that makes it a lot easier to add dynamic classes in React (and it’s under 300 bytes).

We’ll use clsx to make the non-active menus opacity-0 and pointer-events-none to make sure they aren’t clickable. We will also transition the opacity (and soon transform) via transition-all.

```tsx
export function Nav() {
  // ...

      {typeof hovering === "number" && (
        <div ...>
          <div className={clsx(
            "absolute transition-all",
            hovering === 0 ? "opacity-100" : "opacity-0 pointer-events-none"
          )}>
            <Menu0 />
          </div>
          <div className={clsx(
            "absolute transition-all",
            hovering === 1 ? "opacity-100" : "opacity-0 pointer-events-none"
          )}>
            <Menu1 />
          </div>
        </div>
      )}
    // ...
}
```

Now in our React app, we’ve got our transition, but something is clearly wrong here:

Oh yeah! We need to hook up that popoverHeight state we defined earlier. Otherwise, since our inner contents are all absolutely positioned, they break out of the normal document flow and no longer push the popover to match their height automatically (as they would without position: absolute set).

So now, back in our nav component, we need to set the popover height.

This is a good time to recognize that our mouseenter listeners are quite redundant, so let’s start by refactoring those.

Let’s create a new onMouseEnter function that encapsulates our current logic, like so:

```tsx
export function Nav() {
  // ...
  const onMouseEnter = (index: number, el: HTMLElement) => {
    setHovering(index);
    setPopoverLeft(el.offsetLeft);
    // We will add the popover height logic here shortly:
    // setPopoverHeight(...)
  };

  // ...
  <a onMouseEnter={(event) => onMouseEnter(0, event.currentTarget)} ...>
    Products
  </a>
  <a onMouseEnter={(event) => onMouseEnter(1, event.currentTarget)} ...>
    Solutions
  </a>
  // etc...
}
```

Here, we can take the index and anchor element as arguments, and better encapsulate our logic.

Now, we can add our popoverHeight logic directly in here.

But first, we need a reference to the element wrapping the inner contents for each of our menus.

We will use the useRef hook, but because we have a list of menu items, we’ll keep a list of references like so:

```tsx
export function Nav() {
  // ...
  const refs = useRef<(HTMLElement | null)[]>([]);

  // ...
  <Menu0 ref={element => refs.current[0] = element} />
  // ...
  <Menu1 ref={element => refs.current[1] = element} />
  // etc...
}
```

We can use ref on each of the Menu components because of the forwardRef we applied to each earlier.

Now that we have refs to all of the root elements of each menu, we can add our height logic to our new onMouseEnter function and set the value as the height style property for our popover.

```tsx
export function Nav() {
  // ...
  const onMouseEnter = (index: number, el: HTMLElement) => {
    // ...
    const menuElement = refs.current[index];
    if (menuElement) {
      setPopoverHeight(menuElement.offsetHeight);
    }
  };
  // ...

  <div style={{
    left: popoverLeft ?? 0,
    height: popoverHeight ?? 0
  }}>
  // ...
}
```

Now things are getting pretty good! We have the popover left and height transitioning, and each menu item fading in.

Now there is one last piece, which is the best part: fading the menus left and right to make them appear to morph from one to the next.

To do this, we need to add some logic to each menu item to transform the menus to the left if it is before the active menu, center if it is active, and right if it is after the active menu.

But first, let’s clean up our code a bit.

I’m going to bring the per-menu animation logic to be its own wrapper components where we will pass the menu’s index, hovering index, and children as props:

```tsx
import clsx from "clsx";

export function SlideWrapper(props: {
  index: number;
  hovering: number | null;
  children: React.ReactNode;
}) {
  return (
    <div
      className={clsx(
        "absolute w-full transition-all duration-300",
        props.hovering === props.index ? "opacity-100" : "opacity-0 pointer-events-none"
      )}
    >
      {props.children}
    </div>
  );
}
```

Now I can go back and apply this to my Nav component, wrapping each menu, which looks much nicer:

```tsx
import { SlideWrapper } from './components'

export function Nav() {
  // ...
  <SlideWrapper index={0} hovering={hovering}>
    <Menu0 ref={(ref) => (refs.current[0] = ref)} />
  </SlideWrapper>
  <SlideWrapper index={1} hovering={hovering}>
    <Menu1 ref={(ref) => (refs.current[1] = ref)} />
  </SlideWrapper>
  <SlideWrapper index={2} hovering={hovering}>
    <Menu2 ref={(ref) => (refs.current[2] = ref)} />
  </SlideWrapper>
  <SlideWrapper index={3} hovering={hovering}>
    <Menu3 ref={(ref) => (refs.current[3] = ref)} />
  </SlideWrapper>
  // ...
}
```

Now, back in our SlideWrapper component, we can add the logic we described, to change the transform based on if this menu is before, after, or equal to the current hovering index:

```tsx
export function SlideWrapper(props: {
  index: number;
  hovering: number | null;
  children: React.ReactNode;
}) {
  return (
    <div
      className={clsx(
        "absolute w-full transition-all duration-300",
        props.hovering === props.index ? "opacity-100" : "opacity-0 pointer-events-none",
        props.hovering === props.index || props.hovering === null
          ? "transform-none"
          : props.hovering! > props.index
          ? "-translate-x-24"
          : "translate-x-24",
      )}
    >
      {props.children}
    </div>
  );
}
```

Now we’ve got this amazing end result!

Conclusion

This technique you can now use for any type of menu, with any size of inner contents.

As a bonus, the actual Stripe menus have a nice floating arrow that points to the current link, and can even resize to hold menus of different widths. If you’d like a challenge, try adding that logic as well. It’s not too different than what we have, and adds very nice touches.

As a recap, here are the resources we used so you can build this for yourself too:

Introducing Visual Copilot: a new AI model to convert Figma designs to high quality code in a click.

No setup needed. 100% free. Supports all popular frameworks.

Try Visual Copilot

Read the full post on the Builder.io blog

Discussion of the Week – v7


In this weekly roundup, we highlight what we believe to be the most thoughtful, helpful, and/or interesting discussion over the past week! Though we are strong believers in healthy and respectful debate, we typically try to choose discussions that are positive in nature and avoid those that are overly contentious.

Any folks whose articles we feature here will be rewarded with our Discussion of the Week badge. ✨

The Discussion of the Week badge. It includes a roll of thread inside a speech bubble. The thread is a reference to comment threads.

Now that y’all understand the flow, let’s go! 🏃💨

The Discussion of the Week

Give it up for Jordan (@jordantylerburchett) for getting folks talking about their fave OSes with “What is your favorite operating system?“:

Classic question! Hey, oftentimes, it’s the straightforward discussion topics like this one that really get folks talking.

Take a look through the comments section and you’ll see a whole plethora of operating systems being shouted out: MacOS Snow Leopard, Arch Linux, Alpine Linux, Ubuntu, openSUSE, Windows 10, Windows XP, AmigaOS, TempleOS… the list goes on and on. But don’t just listen to me list off the OSes — where’s the fun in that? Ya gotta hop into the post and check out the comments section to hear folks’ reasonings and preferences for different situations.

Also, since Jordan was being humble and only mentioned the OS they created once in the comments, I figured I’d toot the horn for them and point y’all to RefreshOS. Gotta respect an OS creator doing their community research! ✊

What are your picks?

The DEV Community is particularly special because of the kind, thoughtful, helpful, and entertaining discussions happening between community members. As such, we want to encourage folks to participate in discussions and reward those who are initiating or taking part in conversations across the community. After all, a community is made possible by the people interacting inside it.

There are loads of great discussions floating about in this community. This is just the one we chose to highlight. 🙂

I urge you all to share your favorite discussion of the past week below in the comments. And if you’re up for it, give the author an @mention — it’ll probably make ’em feel good. 💚

AWS open source newsletter, #172


September 4th, 2023 – Instalment #172

Welcome to #172 of the AWS open source newsletter, your reliable source for all open source on AWS goodness. What do we have for you this week? Well, more new projects to check out, and plenty of fresh content on the open source projects you all love.

We have tools to help you export your DynamoDB tables as csv files, a tool that goes beyond tracking cost and actually shuts down resources to help you manage your AWS budget, a cool dashboard to help you stay on top of your EC2 configurations, a couple of useful utilities to simplify working with files on Amazon S3, and then a sample Cedar project that helps you implement a Lambda authoriser.

Also featured in this edition is content covering open source technologies including AWS SAM, cedarpy, Cedar, AWS Lambda SnapStart, GraalVM, OpenSearch, Lustre, Kubernetes, MariaDB, MySQL, PostgreSQL, AWS Amplify, GitLab, Next.js, AWS ParallelCluster, Apache Spark, Amazon EMR, Apache Flink, Apache Airflow, Kyverno, CDK8s, sudo-rs, and Sphinx.

Also, be sure to check out the events section as there are a few events happening this week.

Feedback

Before you dive in however, I need your help! Please please please take 1 minute to complete this short survey and you will forever have my gratitude!

Celebrating open source contributors

The articles and projects shared in this newsletter are only possible thanks to the many contributors in open source. I would like to shout out and thank those folks who really do power open source and enable us all to learn and build on top of what they have created.

So thank you to the following open source heroes: Iliyas Maner, Sonia García-Ruiz, k.goto, Stephen Kuenzli, Rehan van der Merwe, Josh Aas, Julian Michel, Olawale Olaleye, Shuting Zhao, Abhishek Gupta, Suman Debnath, Elliott Cordo, Channy Yun, Le Clue Lubbe, Munish Dabra, Lucas Vieira Souza da Silva, Rajiv Upadhyay, Abdallah Shaban, Srinivas Jasti, Sheetal Joshi, Raj Ramasubbu, Brandon Carroll, Stephen Kuenzli, Vadym Kazulkin, and Brad Knowles

Latest open source projects

The great thing about open source projects is that you can review the source code. If you like the look of these projects, make sure you that take a look at the code, and if it is useful to you, get in touch with the maintainer to provide feedback, suggestions or even submit a contribution. The projects mentioned here do not represent any formal recommendation or endorsement, I am just sharing for greater awareness as I think they look useful and interesting!

Tools

dynamodump

dynamodump is a new tool from Iliyas Maner that provides a simple way to dump your AWS DynamoDB table contents to a comma separated value file.

cdk-cost-limit

cdk-cost-limit is a Collection of CDK Constructs to deploy Cost-Aware Self-Limiting Resources. This package lets you set spending limits on AWS. While existing AWS solutions merely alert, this library disables resources, using non-destructive operations, when budgets are hit. This library includes an Aspect and a collection of AWS CDK Level-2 Constructs. They deploy additional resources to compute real-time spending and halt resources when budgets are met (e.g. Lambda Functions reserved concurrency is set to 0). Check out the README for important details around how this works, and the potential impact for your applications. The project is looking for feedback, so take a look and let them know what you think.

ec2-flexibility-score-dashboard

ec2-flexibility-score-dashboard is a nice project that helps you to assess any configuration used to launch instances through an Auto Scaling Group (ASG) against the recommended EC2 best practices. It converts the best practice adoption into a “flexibility score” that can be used to identify, improve, and monitor the configurations (and subsequently, overall organisation level adoption of Spot best practices) which may have room to improve the flexibility by implementing architectural best practices.

The following illustration shows the EC2 Flexibility Score Dashboard:

example ec2 assessment dashboard

aws-s3-integrity-check

aws-s3-integrity-check is a simple tool from Sonia García-Ruiz that provides a Bash script to check the md5 integrity of a set of files that have previously been uploaded to an AWS S3 bucket. The detailed README explains how this works, with plenty of examples and some limitations you should be aware of.

cls3

cls3 is a very handy tool from AWS Community Builder k.goto that helps you to CLear S3 Buckets. It empties (so deletes all objects and versions/delete-markers in) S3 Buckets or deletes the buckets themselves. You can check out the supporting blog post, Tool for fast deletion and emptying of S3 buckets (versioning supported)

Demos, Samples, Solutions and Workshops

cedarpy-example-hello-photos

cedarpy-example-hello-photos is a sample project from AWS Community Builder Stephen Kuenzli that provides an example of how to build a Lambda Authorizer using Cedar Policy and cedarpy. If you check out the Videos section below, you can watch the Twitch session that Stephen did with my colleague Brandon that walks you through this demo.

kendra_retriever_samples

kendra_retriever_samples contains a number of example code samples and supporting CloudFormation templates that help you work with LangChain and Amazon Kendra. It currently has samples for working with a Kendra retriever class to execute a QA chain for SageMaker, OpenAI, and Anthropic providers. To help you deploy this code and understand how it all works, you can follow along with the blog post, Deploy self-service question answering with the QnABot on AWS solution powered by Amazon Lex with Amazon Kendra and large language models

example qna chatbot screenshot

AWS and Community blog posts

Community round up

We have another great selection of community-originated content this week, covering a broad set of open source technologies. First up is AWS Hero Rehan van der Merwe taking a look at TypeScript Remote Procedure Call (better known as tRPC). What is it, I can hear you all asking? Over-fetching and under-fetching are common issues with RESTful APIs. Like GraphQL, tRPC allows you to use TypeScript to define and get only the data you need, avoiding bloated responses and duplicate requests. Rehan dives deep in his post, AWS Lambda with tRPC and separate repos using OpenAPI, which provides a detailed, hands-on guide on how you can use tRPC, trpc-openapi (OpenAPI support for tRPC), and AWS CDK to deploy this on AWS Lambda. Be sure to check out the other posts Rehan has been publishing on this topic.

Josh Aas shared details of the first stable release of sudo-rs, a Rust rewrite of the critical sudo utility, in his post The First Stable Release of a Memory Safe sudo Implementation. This is a good example of the ongoing commitment from AWS to supporting the work of the Internet Security Research Group (ISRG) to improve the memory safety of critical open source tools used by developers.

Sphinx is a great tool for writing documentation, and something that I first got to grips with when contributing to the Apache Airflow project (which I blogged about a while back). AWS Community Builder Julian Michel has put together How to automatically release Sphinx documentation using CDK Pipelines and a custom CodeBuild image that describes how to publish Sphinx projects using CDK pipelines. Very nice indeed.

overview of Sphinx in CDK Pipeline architecture

Next up is Olawale Olaleye with his post, Building an Amazon EKS Cluster Preconfigured to Run High Traffic Microservices, which is a nice tutorial that shows you how you can deploy high-traffic Kubernetes workloads on Amazon EKS. Staying in Cloud Native land, we have Shuting Zhao, who wrote Verifying images in a private Amazon ECR with Kyverno and IAM Roles for Service Accounts (IRSA), which shows how you can securely verify your container images using Kyverno, a CNCF policy engine designed for Kubernetes. Make sure you read this one. To wrap up the Kubernetes content in this section, we have my colleague Abhishek Gupta, who has put together Simplifying Your Kubernetes Infrastructure With CDK8s, sharing details from his talk on how you can use CDK for Kubernetes, or CDK8s, an open-source CNCF project that helps represent Kubernetes resources and applications as code (not YAML!). There is not enough content on this project, so make sure you check that out too.

To finish up this week’s community round up, we have a couple of data related posts. First up is my good friend Suman Debnath, who has put together The Ultimate Guide to Running Apache Spark on AWS, where he explores the decision-making questions that help developers navigate the options and choose the most suitable AWS service for their Spark workloads. Finally, we have AWS Hero Elliott Cordo, who writes about one of my favourite open source projects, Apache Airflow, in The Wrath of Unicron – When Airflow Gets Scary. And whilst this does sound like an episode of Star Trek, I can assure you it is well worth reading, as it provides a nice approach on how you can use SNS and SQS to orchestrate your workflows across orchestrators (multiple Airflow environments). Whilst this might not be suitable for every use case, posts like this provide useful ideas to keep in your back pocket should the need arise.

Apache Flink

I was super happy with the announcement last week that we renamed Amazon Kinesis Data Analytics to Amazon Managed Service for Apache Flink. The name change is effective in the AWS Management Console, documentation, and service webpages. There are no other changes, including to service endpoints, APIs, the AWS Command Line Interface (AWS CLI), the AWS Identity and Access Management (IAM) access policies, Amazon CloudWatch metrics, or the AWS Billing console dashboard. Your existing applications will continue to work as they did previously. My colleague Channy Yun has put together everything you need to know in the blog post, Announcing Amazon Managed Service for Apache Flink Renamed from Amazon Kinesis Data Analytics

aws console rename for apache flink

Apache Spark

In Monitor Apache Spark applications on Amazon EMR with Amazon CloudWatch, Le Clue Lubbe demonstrates how to publish detailed Spark metrics from Amazon EMR to Amazon CloudWatch. By default, Amazon EMR sends basic metrics to CloudWatch to track the activity and health of a cluster. Spark’s configurable metrics system allows metrics to be collected in a variety of sinks, including HTTP, JMX, and CSV files, but additional configuration is required to enable Spark to publish metrics to CloudWatch. Read this post to see how you can configure those metrics and produce nice-looking dashboards in Amazon CloudWatch. [hands on]

example CloudWatch dashboard from Apache Spark data

There was more content on this topic last week, and in Monitor your Databricks Clusters with AWS managed open-source Services Munish Dabra, Lucas Vieira Souza da Silva, and Rajiv Upadhyay explored how you can leverage AWS managed open-source services to monitor your Apache Spark workloads running on Databricks clusters. [hands on]

overview of dashboard for databricks workloads

Other posts and quick reads

The accompanying illustrations from this section covered: an architecture for using Amazon File Cache with AWS ParallelCluster, an OpenSearch dashboard showing security data, and setting up OpenID with GitLab on AWS.

Quick updates

PostgreSQL

Amazon Relational Database Service (RDS) for PostgreSQL now supports the Rust programming language as a new trusted procedural language in PostgreSQL major versions 13 and 14, expanding support for Rust from major version 15. This helps you build high performance user-defined functions to extend PostgreSQL for compute-intensive data processing. Rust combines the performance and resource efficiency of compiled languages like C with mechanisms that limit the risks from unsafe memory use. As a PostgreSQL trusted procedural language, PL/Rust provides memory safety so that an unprivileged user can run code in the database with minimal risk of crashing the database due to a software defect that corrupts memory. Developers can also package PL/Rust code as a Trusted Language Extension for PostgreSQL to run on Amazon RDS.

PL/Rust version 1.2.3 with crate support for aes, ctr, and rand is available on database instances in Amazon RDS running PostgreSQL 13.12 and higher, PostgreSQL 14.9 and higher, and 15.2-R2 and higher in all applicable AWS Regions
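As a taste of what this looks like, here is a minimal PL/Rust sketch, assuming PL/Rust is enabled on your instance; the function body is ordinary Rust:

```sql
-- Minimal PL/Rust sketch. With STRICT, arguments arrive unwrapped,
-- and the body returns a Result wrapping an Option.
CREATE FUNCTION plrust_strlen(val TEXT)
RETURNS BIGINT
LANGUAGE plrust STRICT
AS $$
    Ok(Some(val.len() as i64))
$$;

SELECT plrust_strlen('amazon rds');  -- returns 10
```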

AWS Amplify

Android, Swift, and Flutter libraries now support Time-Based One-Time Passwords (TOTP) as a multi-factor authentication (MFA) method. This gives developers a secure option for validating a user’s identity after the username and password step. Users of apps with TOTP enabled can register with an authenticator app such as Google Authenticator, Authy, or Microsoft Authenticator. After providing their username and password, they complete sign-in by entering the code generated by their authenticator app.

Check out the blog post AWS Amplify supports Time-Based One-Time Password (TOTP) for MFA on Android, Swift, and Flutter, where Abdallah Shaban provides a hands-on guide to this new feature. [hands on]

demo of aws amplify and totp
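Under the hood, the Amplify libraries wrap Amazon Cognito’s software-token MFA APIs. The following is a rough, hypothetical sketch of that enrolment flow in Python with boto3; in a real app the Amplify libraries drive these calls for you, and the session value here is a placeholder standing in for the result of a prior sign-in step.

```python
# Hypothetical sketch of the Cognito software-token (TOTP) enrolment flow
# that Amplify's TOTP support is built on.
import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")
session = "SESSION_FROM_PRIOR_SIGN_IN"  # placeholder: returned by an earlier auth call

# 1. Ask Cognito for a TOTP secret the user registers in an authenticator app,
#    typically rendered to the user as a QR code.
assoc = cognito.associate_software_token(Session=session)
secret = assoc["SecretCode"]

# 2. Verify the 6-digit code the user reads from their authenticator app to
#    complete enrolment.
result = cognito.verify_software_token(
    Session=assoc["Session"],
    UserCode="123456",  # code typed by the user
)
print(result["Status"])  # "SUCCESS" when the code checks out
```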

Kubernetes

The Amazon VPC Container Networking Interface (CNI) Plugin now supports the Kubernetes NetworkPolicy resource. Customers can use the same open-source Amazon VPC CNI to implement both pod networking and network policies to secure the traffic in their Kubernetes clusters. This reduces the need to run additional software for network access controls and will work alongside all existing VPC CNI capabilities.

By default, in Kubernetes, any pod can talk to any other pod within a cluster with no restriction. For better network isolation, Kubernetes NetworkPolicy allows cluster administrators to secure access to and from applications by defining which entities a pod is allowed to communicate with, and vice-versa. However, this requires customers to use additional software to implement NetworkPolicy, often resulting in operational overhead and cost to install and maintain those third-party plugins.

With support for NetworkPolicy in Amazon VPC CNI, customers running Kubernetes on AWS can now allow or deny traffic between their pods based on label selectors, namespaces, IP blocks, and ports with minimal overhead. With native VPC integration, they can secure their applications using standard components, including security groups and network access control lists (ACLs), as part of additional defense-in-depth measures. In addition, customers can trace and troubleshoot configured policies at the cluster and node level using the Amazon VPC CNI plugin. Starting with VPC CNI v1.14, NetworkPolicy support is available on new clusters running Kubernetes version 1.25 and above, but it is turned off by default at launch.

Srinivas Jasti and Sheetal Joshi have put together a detailed blog post, Amazon VPC CNI now supports Kubernetes Network Policies, where they demonstrate how you can enforce fine-grained control over communication, isolate workloads, and enhance the overall security of your AWS Kubernetes clusters, all without the need to manage third-party network policy plugins.
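To give a feel for what such a policy looks like, here is a minimal, hypothetical example built with the official Kubernetes Python client. The namespace, labels, and port are invented, and on EKS the policy only takes effect once the VPC CNI’s network policy support is enabled.

```python
# Hypothetical sketch: allow only "frontend" pods to reach "api" pods on 8080,
# expressed as a Kubernetes NetworkPolicy via the official Python client.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-api", namespace="demo"),
    spec=client.V1NetworkPolicySpec(
        # Select the pods this policy protects.
        pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            # "from" is a reserved word in Python, hence the underscore.
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
            )],
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="demo", body=policy,
)
```

Everything not matched by the ingress rule is denied for the selected pods, which is the label-selector-based allow/deny behavior described above.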

MySQL and MariaDB

Amazon Relational Database Service (Amazon RDS) Optimized Writes now supports m6i and m6g database (DB) instances. With Amazon RDS Optimized Writes you can improve the write throughput for Amazon RDS for MySQL and MariaDB workloads by up to 2x at no additional cost. This is especially useful for write-intensive database workloads, commonly found in applications such as digital payments, financial trading, and online gaming.

In MySQL and MariaDB, a built-in feature called the “doublewrite buffer” protects you from data loss due to unexpected events, such as a power failure, but it writes your data twice: this takes up to twice as long, consumes twice as much I/O bandwidth, and reduces the throughput and performance of your database. Amazon RDS Optimized Writes provides the same protection while writing data only once, improving write throughput by up to 2x at no additional cost.

Amazon RDS Optimized Writes is available as a default option from RDS for MySQL version 8.0.30 and higher, and RDS for MariaDB version 10.6.10 and higher.

Lustre

Amazon FSx for Lustre, a fully managed service that makes it easy and cost-effective to launch, run, and scale the world’s most popular high-performance file system, now supports project quotas. With project quotas, you can group multiple files or directories on your file system into a project and monitor storage consumption on a per-project basis. Project quotas are ideal for storage administrators who manage file systems serving multiple projects or teams and who want to ensure that no project exceeds its allocated storage capacity.

Until today, you could set and enforce user- and group-level storage consumption limits using user quotas and group quotas. With project quotas, you can also set and enforce storage limits based on the number of files or the storage capacity consumed by a specific project. You can set a hard limit to prevent a project from consuming additional storage after exceeding its quota, or a soft limit that gives users a grace period to complete their workloads before it converts into a hard limit.
Support for project quotas is now available at no additional cost on all Amazon FSx for Lustre file systems running on Lustre version 2.15.
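For a flavour of the mechanics, here is a hedged sketch of tagging a directory with a project and setting a quota using the standard lfs tooling, driven from Python. The mount point, project ID, and limits are invented, and exact lfs flags vary between Lustre versions, so check the FSx for Lustre documentation before relying on this.

```python
# Hypothetical sketch: assign a directory to a Lustre project and set quotas
# on an FSx for Lustre mount. Flags may differ by Lustre version.
import subprocess

MOUNT = "/mnt/fsx"    # placeholder mount point
PROJECT_ID = "100"    # placeholder project ID

def lfs(*args: str) -> None:
    subprocess.run(["lfs", *args], check=True)

# Tag the directory with project ID 100; the -s flag sets the inherit bit so
# files created underneath join the project automatically.
lfs("project", "-p", PROJECT_ID, "-s", f"{MOUNT}/team-a")

# Set a soft block limit of 1 TiB and a hard limit of roughly 1.2 TiB.
lfs("setquota", "-p", PROJECT_ID, "-b", "1024g", "-B", "1228g", MOUNT)

# Report current usage against the project quota.
lfs("quota", "-p", PROJECT_ID, MOUNT)
```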

OpenSearch

AWS User Notifications lets you centrally set up and view notifications from AWS services, such as Amazon OpenSearch Service, AWS Health events, Amazon CloudWatch alarms, or Amazon EC2 instance state changes, in a consistent, human-friendly format. Last week it was announced that you can now integrate Amazon OpenSearch Serverless with AWS User Notifications. OpenSearch Serverless is the serverless option for Amazon OpenSearch Service that makes it simple for you to run search and analytics workloads without having to think about infrastructure management.

If you are looking for more details on how you might implement this, then Raj Ramasubbu has you covered in his post, Monitoring Amazon OpenSearch Serverless using AWS User Notifications.

example aws user notification integration with aws opensearch

Videos of the week

Building a simple Lambda Authorizer using cedarpy

Join my colleague Brandon Carroll and AWS Community Builder Stephen Kuenzli as they take a look at building an AWS Lambda authorizer using Stephen’s open source project, cedarpy. This is something I featured in last week’s newsletter, and that I have used myself in my own Python-based application.

Check it out over on Twitch, at this link.

How to reduce cold starts for Java Serverless applications in AWS

Check out Vadym Kazulkin’s session at FrOSCon as he looks at the best practices, features, and possibilities AWS offers Java developers to reduce cold start times, such as GraalVM Native Image and AWS Lambda SnapStart, which is based on the CRaC (Coordinated Restore at Checkpoint) project.

Level Up with AWS SAM: The Ultimate Serverless Toolkit!

One for all you .NET developers, Brad Knowles takes you on a journey of introducing the AWS Serverless Application Model (SAM) toolkit into your serverless development workflow. You will see how you can take a C# .NET API from File…New Project to a fully deployed AWS application using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. While it may seem like magic, Brad digs into the details to demystify that magic so you walk away with a full understanding of the process. No application development journey is complete without testing. AWS SAM has you covered here as well. This session explores SAM’s secret weapon to aid in keeping that dev-test-deploy feedback loop as tight as possible. Great stuff!

Open Source Brief

Featured every week on the AWS Community Radio show: grab a quick five-minute recap of the weekly open source newsletter from yours truly. Last week’s issue is featured in this video.

Check out the playlist here.

Build on Open Source

For those unfamiliar with this show, Build on Open Source is where we go over this newsletter and then invite special guests to dive deep into their open source project. Expect plenty of code, demos, and hopefully laughs. We have put together a playlist so that you can easily access all sixteen episodes of the show: Build on Open Source playlist.

We are currently planning the third series – if you have an open source project you want to talk about, get in touch and we might be able to feature your project in future episodes of Build on Open Source.

Events for your diary

This week, check out the Developer Webinar series, where we have three great open source topics for you. It is online, so there is still time for you to check it out.

If you are planning any events in 2023, either virtual, in person, or hybrid, get in touch as I would love to share details of your event with readers.

Developer Webinar Series, Open Source At AWS
Online, 7th September 11am – 2pm AEST

As part of the Developer Webinar series, we are delighted to showcase three sessions that look at open source on AWS. We have Aish Gunasekar who will be talking about “Leveraging OpenSearch for Security Analytics”. I will be doing a talk on Cedar, in my session “Next generation Authz with Cedar”, and to wrap things up we have Keita Watanabe who will be looking at “Scaling LLM/GenAI deployment with NVIDIA Triton on Amazon EKS”. The sessions are technical deep dives, and there will be Q&A as well.

Jump over to the registration page and sign up; we hope to see many of you there.

Building ML capabilities with PostgreSQL and pgvector extension
YouTube, 14th September 4pm UK time

Generative AI and Large Language Models (LLMs) are powerful technologies for building applications with richer and more personalized user experiences. Application developers who use Amazon Aurora for PostgreSQL or Amazon RDS for PostgreSQL can use pgvector, an open-source extension for PostgreSQL, to harness the power of generative AI and LLMs for driving richer user experiences. Register now to learn more about this powerful technology.

Watch it live on YouTube.

Build ML into your apps with PostgreSQL and the pgvector extension
YouTube, 21st September 4pm UK time

This office hours session is a follow up for those who attended the fireside chat titled “Building ML capabilities into your apps with PostgreSQL and the open-source pgvector extension”. Others are also welcome. Office hours attendees can ask questions related to this topic. Application developers who use Amazon Aurora for PostgreSQL or Amazon RDS for PostgreSQL can use pgvector, an open-source extension for PostgreSQL, to harness the power of generative AI and LLMs for driving richer user experiences. Join us to ask your questions and hear the answers to the most frequently asked questions about the pgvector extension for PostgreSQL.

Watch it live on YouTube.

Open Source Summit, Europe
September 19th-21st, Bilbao, Spain

“Open Source Summit is the premier event for open source developers, technologists, and community leaders to collaborate, share information, solve problems, and gain knowledge, furthering open source innovation and ensuring a sustainable open source ecosystem. It is the gathering place for open-source code and community contributors.” You will find AWS as well as myself at Open Source Summit this year, so come by the AWS booth and say hello – from the glimpses I have seen so far, it is going to be awesome! Find out more at the official site, Open Source Summit Europe 2023.

OpenSearchCon
Seattle, September 27-29, 2023

Registration is now open for OpenSearchCon. Check out this post from Daryll Swager, Registration for OpenSearchCon 2023 is now open!, which covers what you can expect and the resources you need to help plan your trip.

CDK Day, 2023
Online, 29th September 2023

Back for the fourth instalment, this community-led event is a must-attend for anyone working with infrastructure as code using the AWS Cloud Development Kit (CDK). It is intended to provide learning opportunities for all users of the CDK and related libraries. The event will be live streamed on YouTube, and you can find out more at the website, CDK Day.

Open Source India
October 12-13th, NIMHANS Convention Center, Bengaluru

One of the most important open source events in the region, Open Source India will welcome thousands of attendees to discuss and learn about open source technologies. I will be there too, giving a talk, so I would love to meet any of you who are also planning to attend. Check out more details on their web page, here.

All Things Open
October 15th-17th, Raleigh Convention Center, Raleigh, North Carolina

I will be attending and speaking at All Things Open, looking at Apache Airflow as a container orchestrator. I will be there with a bunch of fellow AWS colleagues, and I hope to meet some of you there. Check us out at the AWS booth, where you will find me and other AWS folk throughout the event. Check out the sessions and speakers at the official webpage for the event, AllThingsOpen 2023.

Cortex
Every other Thursday, next one 16th February

The Cortex community call happens every two weeks on Thursday, alternating between 1200 UTC and 1700 UTC. You can check out the GitHub project for more details; go to the Community Meetings section. The community calls keep a rolling doc of previous meetings, so you can catch up on earlier discussions. Check the Cortex Community Meetings Notes for more info.

OpenSearch
Every other Tuesday, 3pm GMT

This regular meet-up is for anyone interested in OpenSearch & Open Distro. All skill levels are welcome and they cover and welcome talks on topics including: search, logging, log analytics, and data visualisation.

Sign up to the next session, OpenSearch Community Meeting

Stay in touch with open source at AWS

Remember to check out the Open Source homepage to keep up to date with all our activity in open source, and follow us on @AWSOpen.

The post AWS open source newsletter, #172 appeared first on ProdSens.live.

]]>
https://prodsens.live/2023/09/04/aws-open-source-newsletter-172/feed/ 0
Ctrl+Z to the Past: Time-Travelling to Your First Code Line https://prodsens.live/2023/08/16/ctrlz-to-the-past-time-travelling-to-your-first-code-line/?utm_source=rss&utm_medium=rss&utm_campaign=ctrlz-to-the-past-time-travelling-to-your-first-code-line https://prodsens.live/2023/08/16/ctrlz-to-the-past-time-travelling-to-your-first-code-line/#respond Wed, 16 Aug 2023 07:24:41 +0000 https://prodsens.live/2023/08/16/ctrlz-to-the-past-time-travelling-to-your-first-code-line/ ctrl+z-to-the-past:-time-travelling-to-your-first-code-line

We’re going back to coding school with Nostalgia Bytes this week! Don’t forget your TI calculators, Trapper Keepers,…

The post Ctrl+Z to the Past: Time-Travelling to Your First Code Line appeared first on ProdSens.live.

]]>
ctrl+z-to-the-past:-time-travelling-to-your-first-code-line

We’re going back to coding school with Nostalgia Bytes this week! Don’t forget your TI calculators, Trapper Keepers, Lisa Frank folders, and USB drives. Each decade has its own story to tell. So get ready to relive the past and share your nostalgic memories with fellow developers!

🤖 Time travel back to your first line of code: What would your past self say to your present self?

Follow the DEVteam for more discussions and online camaraderie!

The post Ctrl+Z to the Past: Time-Travelling to Your First Code Line appeared first on ProdSens.live.

]]>
https://prodsens.live/2023/08/16/ctrlz-to-the-past-time-travelling-to-your-first-code-line/feed/ 0
Product Launch Plan For SaaS: How to Launch Products Successfully https://prodsens.live/2023/08/16/product-launch-plan-for-saas-how-to-launch-products-successfully/?utm_source=rss&utm_medium=rss&utm_campaign=product-launch-plan-for-saas-how-to-launch-products-successfully https://prodsens.live/2023/08/16/product-launch-plan-for-saas-how-to-launch-products-successfully/#respond Wed, 16 Aug 2023 07:24:33 +0000 https://prodsens.live/2023/08/16/product-launch-plan-for-saas-how-to-launch-products-successfully/ product-launch-plan-for-saas:-how-to-launch-products-successfully

There’s a vast gap between a great idea and bringing that idea into reality: and there’s no better…

The post Product Launch Plan For SaaS: How to Launch Products Successfully appeared first on ProdSens.live.

]]>
product-launch-plan-for-saas:-how-to-launch-products-successfully

There’s a vast gap between a great idea and bringing that idea into reality, and there’s no better example than a product launch plan.

If you get your product launch plan wrong, all the hard work and effort of building your product might be for nothing.

In this article, we’re going to make sure that doesn’t happen. We’ll give you the step-by-step checklist you need to create a solid product launch plan – one that will guarantee a successful launch.

Ready to get started?

TL;DR

  • A product launch plan is a document outlining the steps in bringing a new product to market, typically containing three key phases: pre-launch, the launch itself, and post-launch.
  • There are several different varieties of launch plans. A soft launch (or dark launch) is all about releasing to a limited audience, a minimal launch involves releasing with a limited set of features, and a full launch means releasing your new product in its entirety to the broadest possible audience.
  • To create a successful launch plan, you’ve got to start with solid foundations: the pre-launch.
  • Start by conducting extensive market research to understand the landscape, map out the needs of individual users to understand pain points, develop a clear value proposition, and then define key performance indicators to measure progress against.
  • Beta testing can help you identify issues ahead of a wider go-live – and you can then plan a launch date, time, and budget.
  • Next up is the launch itself. This is all about launching in several places at once rather than relying on a single platform.
  • Post-launch is a crucial phase too. It’s a chance to evaluate the performance of your launch plan, collect feedback (and act on it), use analytics to dig into user behavior, and make sure you’ve got your retention strategy clearly defined.
  • None of this is possible without the right tool for the job – Userpilot can help.
  • Our powerful product adoption platform enables you to segment users and deploy a range of customizable in-app messages, and gather in-app feedback using ready-made survey templates.
  • Book a demo with our team to learn more.

What is a product launch plan?

A product launch plan is typically a document or visual reference point outlining the steps in bringing a new product to market.

It typically includes the following sections:

  • Pre-launch phase. This is all about activities that’ll set you up for success: you’ll define your launch strategy – aligned with part of a broader marketing strategy – make sure you properly understand your market and get the product positioning right. The critical factor here is to nail the go-to-market strategy before you start.
  • Launch phase. This is all about actually getting your product live: it’s the point your launch strategy becomes reality. Ultimately, you want to launch your product to potential customers successfully – the launch phase is where you kick off a whole array of launch activities and begin your campaign in earnest.
  • Post-launch phase. There’s not really a ‘final stage’ of a product launch. You need to keep going: assess your performance, reflect on what went well and what didn’t, and use that knowledge to refine any future launch plan you put together. Your sales team will be in overdrive.

A well-crafted product launch plan can help ensure that a new product is successfully launched – and ultimately achieves its business goals. Any product launch plan will require extensive market research, careful planning, and execution – it’s no mean feat for any product manager.

Visual of product launch phases
Understand the key activities at different stages of your launch.

Different types of product launch plans

Context is everything.

While the key details of each product launch will be unique – from the target market, the launch timeline, the distribution channels and more – there will typically be common types of launch plan you can draw inspiration from.

  • Soft launch plan. Sometimes referred to as a ‘dark launch’, this involves releasing the product to a limited audience before making it more widely available. This is often seen as a much less risky way to launch a new product than a traditional launch: you can get user feedback and make improvements without the product being fully out in the open.
  • Minimal launch plan. This is another way of limiting the scope of your release: releasing the product with a limited set of features and functionality. This is done to reduce the risk and cost of the launch, while still allowing the company to gather feedback from users and make improvements to the product before it is released more widely.
  • Full-scale launch plan. A full-scale launch involves releasing the whole product to the general public with all of its features and functionality. This is the most common type of product launch plan, typically used for products that are considered to be mature and ready for the market. You’ll have the widest reach, but of course, you’re more at risk of external factors.

Steps to create a successful product launch plan

Although every product will be different, a successful product launch will typically have a common series of steps. Let’s break them down phase by phase.

Pre-launch phase

As discussed, this is the stage where you get yourself and your team organized. It’s all about devising your marketing plans, gathering info, and preparing for what’s to come.

“You can’t build a great building on a weak foundation.”

Here’s how to nail the pre-launch:

Conduct market research and competitive analysis

The better you understand your target market, the more likely the chance of launch success.

Market research gives you a fantastic opportunity to learn from the mistakes of your competitors and spot opportunities for differentiation. You draw on success stories, figure out what might work for you, and help the team stay focused.

Dig into your target user personas

Once you understand the target market, you can focus more deeply on the individual target customer.

A persona is a detailed visualization of your target audience, including their challenges, their motivations, their pain points, and ultimately how your product can help.

Don’t build personas based on guesswork and assumptions: gather valuable data through research, focus groups, direct surveys, and observing behavior.

Visual of user persona Product Launch Plan
This persona for product marketers can help you align needs to your app’s key features.

Define your value proposition and develop a messaging strategy

The information and thinking you’ve worked through will help you solidify your core product positioning, define a unique value proposition: and build a messaging strategy around that.

Positioning is a subtle and tricky thing to get right. Good positioning statements capture needs, user requirements, differentiating factors, and how features map to outcomes.

Visual of product positioning benefits
The key to great positioning is matching your offering to user needs.

Set SMART goals and tie them to key performance indicators

Product launches shouldn’t be random: they’ll typically tie to a broader business strategy. To make sure you nail your launch, you should be able to clearly articulate the business objectives they contribute to.

Visual of SMART goal setting
Don’t set vague, unachievable goals.

SMART goals are a fantastic way to go.

Remember the old adage from management expert Peter Drucker:

“What gets measured gets managed.”

Here are a few examples of goals you might set when launching a new product:

  • Improving customer satisfaction scores by X%;
  • Increasing sales by X% in the first month;
  • Increasing sign-ups by X% in the first three months.

Don’t just think short term: your goals should be measurable so that you can monitor your progress over time and work out if you’re moving in the right direction.

Some KPIs that might help you here map directly to those goals: customer satisfaction score, monthly sales, and new sign-ups.

Develop a product launch marketing plan

By now, you’ve got a solid understanding of the market, your individual users, your value proposition, and how you’ll measure success.

Building a product launch marketing plan helps you gather all of that information and translate it into an actionable plan. Choose your marketing channels, allocate a budget, decide where you’ll focus your efforts, and how to chunk the work up.

A good plan should have critical milestones you can measure progress against and help drive the team forward.

Create promotional marketing materials to generate buzz before the launch date

You can’t just put a plan out there and expect it to work automatically.

It takes time and effort to build hype. That might be through blogs, social media campaigns, other forms of digital marketing, or even influencer marketing.

Which will deliver the most success depends on a huge range of factors: you need to experiment to figure out what works in your context.

Screenshot of social media marketing materials
Build awareness of your product across multiple platforms.

Perform beta testing before the product launches

Beta testing is a powerful concept.

It’s essentially a dress rehearsal: a chance to release real features and put them in front of real users but without the risk of a mass launch. You’ll typically also spot any potential showstopper bugs before they escape into the wider user base.

Another benefit is that beta users will typically be ‘power users’: interested, engaged, and tech-savvy. They’ll provide great feedback, help inform key elements of your launch – and ultimately help the campaign succeed.

Screenshot of beta tester popup Product Launch Plan
Segment ideal candidates for beta testing and trigger in-app messages targeting them.

Decide on the launch date, timeline, and budget

The final step – and an important one!

This is where the intricate details come to the fore: logistics can trip up any launch, particularly when you’re dealing with complex tech stacks. You need to resolve dependencies, get the sequencing right, and ensure there are no blockers in the way.

Done right, this will have a huge impact on launch productivity.

Visual of product launch plan
Put a clear, step-by-step plan together.

Launch phase

You’ve laid the foundations. Now we’re at the main event: the launch itself.

Launch your product on multiple platforms

The most important thing here is not to rely on a single platform as the only place you launch your product. You need to increase awareness however you can: Indiehackers, Reddit, LinkedIn, and ProductHunt are all excellent examples.

Screenshot of Userpilot launch on ProductHunt
Don’t become reliant on just one platform.

Post-launch phase

Once you’ve got your product out there, and successfully got it in front of users, it’s time to move onto the post-launch phase.

Evaluate the performance of your marketing campaigns

If you don’t reflect on your performance, you’ll never improve.

You need to carve out the time and effort to analyze your marketing efforts, compare your performance against your objectives, and if you fell short – identify where.

For example, let’s say you launched a series of ads on Facebook. You might have set a goal of converting 10% of viewers who came across it: if you only generate half as many sign-ups as you planned, you shouldn’t continue on with a flawed strategy.

Maybe you could experiment with format, content, messaging tone, and platform – there are many ways to tweak your approach. The important thing is a mindset of evaluation and experimentation.

Collect customer feedback to improve your product

Gathering direct user feedback – and using it to target enhancements – is one of the very best ways to improve customer satisfaction.

It’s great product practice to have a short, tight feedback loop between gathering ideas from your customers and transforming them into reality.

It means you’re not relying on guesswork or hunches to define your roadmap: everything you work on should be clearly linked to a direct user need.

Screenshot of Userpilot feedback survey
Feedback is easy to capture with Userpilot.

Analyze in-app customer behavior for actionable insights

Next, you should use analytics to better understand customer behavior.

What’s the usage frequency of a particular feature? Where are you seeing drop-off points in the journey? Which features generate the most friction?

All of these actionable insights give you clues: you’ll know which areas to investigate further, and where to focus your efforts next.

Screenshot of Userpilot analytics
Analyze customer behavior with Userpilot.

Create a retention marketing strategy and focus on keeping the acquired customers

A campaign launch – and capturing new customers for a venture – is of absolutely no use if you don’t manage to keep hold of the customers you’ve already acquired.

It’s a well-known fact that in the SaaS world, the cost of acquisition far exceeds the cost of retention: most companies make money from retaining their users, rather than constantly battling churn.

To get this right, you need to make sure you include retention strategies right from the start rather than trying to shoehorn them in at the end: they should be considered in the earliest possible draft of your product launch plan.

How Userpilot can help you with your product launch strategy

None of this is possible without the right tool for the job.

Userpilot is a powerful product adoption platform that can help you nail any product launch. Here’s how it can help.

Segment customers for a personalized experience

You shouldn’t treat your customers as one homogeneous group. Typically, they’ll fit into clear categories or ‘segments’ – these might be based on acquisition channel, industry, the date they signed up, answers to welcome surveys, key jobs to be done, and more.

Userpilot can help you create fine-grained user segments, so you can trigger personalized experiences bespoke to each user group.

That’ll give you the best chance of delighting your customers – and hopefully retaining their business.

Screenshot of Userpilot segmentation
Segment users by a range of categories with Userpilot.

Create in-app messaging with different UI elements

Userpilot makes it simple to launch a whole host of different UI elements – so you can choose the best and most relevant option based on the context.

Examples of UI patterns to choose from include:

  • Modals. An engaging way to fill up most of the screen and capture user attention – for example, for a new feature announcement.
  • Tooltips. Provide contextually relevant guidance to help users figure out how to unlock value from your product.
  • Slideouts. A handy way to share extra information with your users without detracting from the main customer journey.
  • Checklists. Drive users toward the next most valuable action in a visually engaging, interactive to-do list.

Screenshot of Userpilot UI patterns
Choose the right in-app messaging for the job.

Collect customer feedback in-app

We’ve discussed how important user feedback is. Userpilot provides you with a huge array of customizable templates to get you off to a fast start.

Remember, you can trigger surveys contextually based on custom events: a proven way to gather feedback at relevant points in the user journey.

Animation of in-app surveys
Easily gather customer feedback with Userpilot.

Conclusion

That concludes our in-depth guide to the world of product launches.

Hopefully, you and your marketing team have lots of useful information to absorb and apply to your own product: what a product launch plan is, how to put one together, different types of launches, and more.

If you want to get started putting an impressive product launch plan together yourself – and build product experiences code-free – simply book a demo call with our team, and get started!

The post Product Launch Plan For SaaS: How to Launch Products Successfully appeared first on Thoughts about Product Adoption, User Onboarding and Good UX | Userpilot Blog.

The post Product Launch Plan For SaaS: How to Launch Products Successfully appeared first on ProdSens.live.

]]>
https://prodsens.live/2023/08/16/product-launch-plan-for-saas-how-to-launch-products-successfully/feed/ 0
Single Source of Truth https://prodsens.live/2023/08/02/single-source-of-truth/?utm_source=rss&utm_medium=rss&utm_campaign=single-source-of-truth https://prodsens.live/2023/08/02/single-source-of-truth/#respond Wed, 02 Aug 2023 10:25:48 +0000 https://prodsens.live/2023/08/02/single-source-of-truth/ single-source-of-truth

1. The Principle of Single Source of Truth (SPOT) The SPOT, or Single Point of Truth, principle is…

The post Single Source of Truth appeared first on ProdSens.live.

]]>
single-source-of-truth

1. The Principle of Single Source of Truth (SPOT)

The SPOT, or Single Point of Truth, principle is a better and more semantically focused interpretation of DRY (Don’t Repeat Yourself). The principle suggests that for any given piece of information, there should be only one authoritative location where that data is defined or set. For instance, if there’s a rule that pizzas need at least one topping, there should be a single place in the code where that condition is articulated. Maintaining a single source of truth ensures that when a change is needed, it is made in one authoritative place rather than updated in some spots and missed in others.
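As a minimal sketch of the idea, using the pizza rule above (the constant and function names here are invented for illustration):

```python
# Single point of truth: the topping rule is defined exactly once.
MIN_TOPPINGS = 1

def validate_pizza(toppings: list[str]) -> None:
    """Raise if the pizza breaks the topping rule."""
    if len(toppings) < MIN_TOPPINGS:
        raise ValueError(f"A pizza needs at least {MIN_TOPPINGS} topping(s).")

def create_order(toppings: list[str]) -> dict:
    validate_pizza(toppings)  # reuses the rule rather than re-stating it
    return {"toppings": toppings}

def can_submit(toppings: list[str]) -> bool:
    try:
        validate_pizza(toppings)  # no second copy of "len(...) < 1" anywhere
        return True
    except ValueError:
        return False
```

If the rule later changes to require two toppings, only MIN_TOPPINGS changes, and every caller picks up the new rule automatically.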

2. The Significance of Code Locality

A related concept to SPOT is code locality. Code locality refers to the organization of related code segments, with the goal of enabling humans to discover and remember related code easily. The idea is to keep related elements as close together as possible in the code, especially when multiple sources of truth are necessary. This organization strategy aids in code comprehension and enhances the efficiency of tracking and implementing changes in the codebase.

3. Spatial Awareness in Code

The saying, “if things tend to change together, they should be closer together,” also aligns with the principle of code locality. Structuring the code in a way that elements likely to change simultaneously are placed together can significantly improve code maintainability. This approach, in contrast to using layers as a primary organizational method, prevents scattering related elements across different top-level directories.

4. Simple Over Complex Structures

Another aspect of good coding practice that ties in with the SPOT principle is the preference for simpler structures. When structuring the codebase, linear arrangements, or ‘lists,’ are often more favorable than tree or graph structures. They are easier to understand, and they help prevent unnecessarily complex and convoluted code that could lead to difficulties in maintenance and comprehension.

I’m building a game about software development, incorporating real principles that have proven their usefulness throughout my 20+ years of career experience.

I’m looking for feedback on the Alpha version – try the game now.

5. The Principle of Least Power

The principle of least power could be seen as an overarching guideline to the practices described above. This principle suggests that the simplest solution capable of effectively solving a problem is usually the best choice. As applied to code structures, while the graph data structure is highly versatile, it should only be used when simpler data structures fall short, minimizing complexity.

6. The Advantage of Flat Data Over Tree Data

While flat data is simpler to handle than tree data, the real distinction comes when processing the data. Trees invite recursion, and as the complexity of the logic increases, it can become increasingly difficult to understand what’s going on. Conversely, processing lists iteratively is less complex, and refactoring recursive code to iterative code can often reveal or fix hidden bugs.
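To illustrate, here is a small sketch contrasting the two styles on an invented nested-category structure. Both functions count nodes, but the iterative version keeps its traversal state in an explicit, inspectable stack:

```python
# Invented example: a tree of menu categories.
tree = {"name": "menu", "children": [
    {"name": "pizza", "children": [{"name": "margherita", "children": []}]},
    {"name": "pasta", "children": []},
]}

def count_recursive(node: dict) -> int:
    # Mirrors the tree shape, but hides state in the call stack and can
    # overflow on very deep trees.
    return 1 + sum(count_recursive(child) for child in node["children"])

def count_iterative(root: dict) -> int:
    # Flat loop over an explicit stack: the state is visible, and there is
    # no recursion depth limit to worry about.
    count, stack = 0, [root]
    while stack:
        node = stack.pop()
        count += 1
        stack.extend(node["children"])
    return count

assert count_recursive(tree) == count_iterative(tree) == 4
```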

7. Understanding the Balance

As a junior developer, understanding the balance between too much and too little abstraction is key. While the SPOT principle and related concepts can greatly aid in creating maintainable, efficient code, it’s essential to know when to apply these principles. Overcomplicating a simple problem with unnecessary abstractions or prematurely refactoring can be as detrimental as not applying the principles at all. Ultimately, the goal is to write code that effectively addresses the core problem, while also being easy to maintain and understand.

Follow me on X(ex-Twitter)

The post Single Source of Truth appeared first on ProdSens.live.

]]>
https://prodsens.live/2023/08/02/single-source-of-truth/feed/ 0
Differentiating your message with world-class competitive intelligence https://prodsens.live/2023/07/17/differentiating-your-message-with-world-class-competitive-intelligence/?utm_source=rss&utm_medium=rss&utm_campaign=differentiating-your-message-with-world-class-competitive-intelligence https://prodsens.live/2023/07/17/differentiating-your-message-with-world-class-competitive-intelligence/#respond Mon, 17 Jul 2023 10:25:58 +0000 https://prodsens.live/2023/07/17/differentiating-your-message-with-world-class-competitive-intelligence/ differentiating-your-message-with-world-class-competitive-intelligence

I’m here to share how you can differentiate your product and company with competitive intelligence (CI). Spoiler alert:…

The post Differentiating your message with world-class competitive intelligence appeared first on ProdSens.live.

]]>
differentiating-your-message-with-world-class-competitive-intelligence

I’m here to share how you can differentiate your product and company with competitive intelligence (CI).

Spoiler alert: if you don’t understand your competition, it’s really tough to highlight what sets your product apart.

The post Differentiating your message with world-class competitive intelligence appeared first on ProdSens.live.

]]>
https://prodsens.live/2023/07/17/differentiating-your-message-with-world-class-competitive-intelligence/feed/ 0