Using PHP Attributes to Create and Use a Custom Validator in Symfony

Symfony, a leading PHP framework, is consistently updated to leverage modern PHP features. With PHP 8, attributes provide a new way to define metadata for classes, methods, properties, etc., which can be used for validation constraints. This blog post will guide you through creating and using a custom validator in Symfony to validate UK mobile number prefixes using PHP attributes.

What Are PHP Attributes?

PHP attributes, introduced in PHP 8, enable you to add metadata to various code elements, accessible via reflection. In Symfony, attributes can simplify defining validation constraints, making your code more concise and readable.
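As a quick standalone illustration (a hypothetical MaxLength attribute and Product class, not part of Symfony), an attribute is an ordinary class marked with #[Attribute], and its instances can be read back through reflection:

// A hypothetical attribute, shown outside of Symfony for illustration
#[Attribute(Attribute::TARGET_PROPERTY)]
class MaxLength
{
    public function __construct(public int $length) {}
}

class Product
{
    #[MaxLength(50)]
    public string $name = '';
}

// Reading the metadata back via reflection
$property = new ReflectionProperty(Product::class, 'name');
foreach ($property->getAttributes(MaxLength::class) as $attribute) {
    echo $attribute->newInstance()->length; // prints 50
}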

Example 1 – Creating a Custom Validator for UK Mobile Number Prefix

Let’s create a custom validator to check if a phone number has a valid UK mobile prefix (e.g., starting with ‘07’).

Step 1: Define the Attribute

Create a new attribute class that defines the custom constraint.

// src/Validator/Constraints/UkMobile.php
namespace App\Validator\Constraints;

use Attribute;
use Symfony\Component\Validator\Constraint;

#[Attribute(Attribute::TARGET_PROPERTY | Attribute::TARGET_METHOD)]
class UkMobile extends Constraint
{
    public string $message = 'The number "{{ string }}" is not a valid UK mobile number.';
}

Step 2: Create the Validator

Next, create the validator with the logic to check if a phone number has a valid UK mobile prefix.

// src/Validator/Constraints/UkMobileValidator.php
namespace App\Validator\Constraints;

use Symfony\Component\Validator\Constraint;
use Symfony\Component\Validator\ConstraintValidator;
use Symfony\Component\Validator\Exception\UnexpectedTypeException;

class UkMobileValidator extends ConstraintValidator
{
    public function validate($value, Constraint $constraint): void
    {
        if (null === $value || '' === $value) {
            return;
        }

        if (!is_string($value)) {
            throw new UnexpectedTypeException($value, 'string');
        }

        // Check for a valid UK mobile number: '07' followed by nine digits
        if (!preg_match('/^07[0-9]{9}$/', $value)) {
            $this->context->buildViolation($constraint->message)
                ->setParameter('{{ string }}', $value)
                ->addViolation();
        }
    }
}

Step 3: Apply the Attribute in an Entity

Use the UkMobile attribute in your entities to enforce this custom validation rule.

// src/Entity/User.php
namespace App\Entity;

use App\Validator\Constraints as AppAssert;
use Doctrine\ORM\Mapping as ORM;
use Symfony\Component\Validator\Constraints as Assert;

#[ORM\Entity]
class User
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column(type: 'integer')]
    private $id;

    #[ORM\Column(type: 'string', length: 15)]
    #[Assert\NotBlank]
    #[AppAssert\UkMobile]
    private $mobileNumber;

    // getters and setters
}

Step 4: Test the Validator

Ensure everything works correctly by writing some unit tests or using Symfony’s built-in validation mechanism.

// tests/Validator/Constraints/UkMobileValidatorTest.php
namespace App\Tests\Validator\Constraints;

use App\Validator\Constraints\UkMobile;
use App\Validator\Constraints\UkMobileValidator;
use Symfony\Component\Validator\ConstraintValidator;
use Symfony\Component\Validator\Test\ConstraintValidatorTestCase;

class UkMobileValidatorTest extends ConstraintValidatorTestCase
{
    protected function createValidator(): ConstraintValidator
    {
        return new UkMobileValidator();
    }

    public function testNullIsValid(): void
    {
        $this->validator->validate(null, new UkMobile());

        $this->assertNoViolation();
    }

    public function testValidUkMobileNumber(): void
    {
        $this->validator->validate('07123456789', new UkMobile());

        $this->assertNoViolation();
    }

    public function testInvalidUkMobileNumber(): void
    {
        $constraint = new UkMobile();

        $this->validator->validate('08123456789', $constraint);

        $this->buildViolation($constraint->message)
            ->setParameter('{{ string }}', '08123456789')
            ->assertRaised();
    }
}

Example 2 – Creating a Custom Validator for Glasgow Postcodes

In this example, we want to create a custom validator to check if a postcode is a valid Glasgow postcode. This could be used by professional trade services (e.g., Bark.com), where a company only serves certain areas.

Step 1: Define the Attribute

First, create a new attribute class to define the custom constraint.

// src/Validator/Constraints/GlasgowPostcode.php
namespace App\Validator\Constraints;

use Attribute;
use Symfony\Component\Validator\Constraint;

#[Attribute(Attribute::TARGET_PROPERTY | Attribute::TARGET_METHOD)]
class GlasgowPostcode extends Constraint
{
    public string $message = 'The postcode "{{ string }}" is not a valid Glasgow postcode.';
}

Step 2: Create the Validator

Next, create the validator with the logic to check if a postcode is a valid Glasgow postcode.

// src/Validator/Constraints/GlasgowPostcodeValidator.php
namespace App\Validator\Constraints;

use Symfony\Component\Validator\Constraint;
use Symfony\Component\Validator\ConstraintValidator;
use Symfony\Component\Validator\Exception\UnexpectedTypeException;

class GlasgowPostcodeValidator extends ConstraintValidator
{
    public function validate($value, Constraint $constraint): void
    {
        if (null === $value || '' === $value) {
            return;
        }

        if (!is_string($value)) {
            throw new UnexpectedTypeException($value, 'string');
        }

        // Regex for validating Glasgow postcodes (starting with G)
        $pattern = '/^G\d{1,2}\s?\d[A-Z]{2}$/i';

        if (!preg_match($pattern, $value)) {
            $this->context->buildViolation($constraint->message)
                ->setParameter('{{ string }}', $value)
                ->addViolation();
        }
    }
}

Step 3: Apply the Attribute in an Entity

Use the GlasgowPostcode attribute in your entities to enforce this custom validation rule.

// src/Entity/Address.php
namespace App\Entity;

use App\Validator\Constraints as AppAssert;
use Doctrine\ORM\Mapping as ORM;
use Symfony\Component\Validator\Constraints as Assert;

#[ORM\Entity]
class Address
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column(type: 'integer')]
    private $id;

    #[ORM\Column(type: 'string', length: 10)]
    #[Assert\NotBlank]
    #[AppAssert\GlasgowPostcode]
    private $postcode;

    // getters and setters
}

Step 4: Test the Validator

Ensure everything works correctly by writing some unit tests or using Symfony’s built-in validation mechanism.

// tests/Validator/Constraints/GlasgowPostcodeValidatorTest.php
namespace App\Tests\Validator\Constraints;

use App\Validator\Constraints\GlasgowPostcode;
use App\Validator\Constraints\GlasgowPostcodeValidator;
use Symfony\Component\Validator\ConstraintValidator;
use Symfony\Component\Validator\Test\ConstraintValidatorTestCase;

class GlasgowPostcodeValidatorTest extends ConstraintValidatorTestCase
{
    protected function createValidator(): ConstraintValidator
    {
        return new GlasgowPostcodeValidator();
    }

    public function testNullIsValid(): void
    {
        $this->validator->validate(null, new GlasgowPostcode());

        $this->assertNoViolation();
    }

    public function testValidGlasgowPostcode(): void
    {
        $this->validator->validate('G1 1AA', new GlasgowPostcode());

        $this->assertNoViolation();
    }

    public function testInvalidGlasgowPostcode(): void
    {
        $constraint = new GlasgowPostcode();

        $this->validator->validate('EH1 1AA', $constraint);

        $this->buildViolation($constraint->message)
            ->setParameter('{{ string }}', 'EH1 1AA')
            ->assertRaised();
    }
}

Beyond Entities

Custom validators aren’t restricted to entities. They can be used to apply validation to properties and methods of any class you need. For example, if we wanted to use the GlasgowPostcode validator in a DTO, we could do something like this:

// src/DTO/PostcodeDTO.php
namespace App\DTO;

use App\Validator\Constraints as AppAssert;
use Symfony\Component\Validator\Constraints as Assert;

class PostcodeDTO
{
    #[Assert\NotBlank]
    #[AppAssert\GlasgowPostcode]
    private string $postcode;

    public function __construct(string $postcode)
    {
        $this->postcode = $postcode;
    }

    public function getPostcode(): string
    {
        return $this->postcode;
    }
}

To check that this DTO contains valid data, we would make use of the validation service:

$postcodeDTO = new PostcodeDTO('G1 1AA');
$violations = $this->validator->validate($postcodeDTO);
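In a Symfony service or controller, the $this->validator used above is typically obtained by autowiring ValidatorInterface. Here is a minimal sketch, assuming a hypothetical PostcodeChecker service (the class and method names are illustrative):

// src/Service/PostcodeChecker.php (illustrative example)
namespace App\Service;

use App\DTO\PostcodeDTO;
use Symfony\Component\Validator\Validator\ValidatorInterface;

class PostcodeChecker
{
    // Symfony autowires the validator service via the constructor
    public function __construct(private ValidatorInterface $validator)
    {
    }

    public function isValid(string $postcode): bool
    {
        $violations = $this->validator->validate(new PostcodeDTO($postcode));

        // An empty violation list means every constraint passed
        return count($violations) === 0;
    }
}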

Conclusion

Using PHP attributes to define custom validators in Symfony can enhance code readability and leverage modern PHP features. By following the steps outlined above, you can create robust, reusable validation logic that integrates seamlessly with Symfony’s validation system. This approach simplifies adding custom validations and keeps your code clean and maintainable.

Happy coding!

Originally published at https://chrisshennan.com/blog/using-php-attributes-to-create-and-use-a-custom-validator-in-symfony

Why Do Some Software Projects Fail?

It’s not because of the tech stack, programming languages, or libraries.

Choosing the right tech stack and libraries is a critical decision for the success of a software project.

But, in my experience, software projects fail due to unclear expectations and communication issues. Interestingly enough, I’ve heard the same thing about marriages.

Even, I’d dare to say that unclear expectations are a communication issue too.

I’ve been on software projects that used Domain-Driven Design, Test-Driven Development, and Kubernetes, and still ended up late and going off the rails. It’s not a tech problem. It’s always a people problem.

Software projects fail when:

  • Stakeholders fail to communicate their expectations.
  • Leaders fail to communicate changes in project goals and scope.
  • Leaders fail to communicate action plans.
  • Team members fail to communicate technical issues on time.

One of the most common communication issues is waiting until the day before a deadline to say you’ve been dealing with a coding issue for weeks. But, in the meantime, you kept saying everything was fine. That would piss off any leader or project manager.

The lesson here is always to ask ourselves: Who else should I communicate this to?

From my recent software projects, I’ve learned more about leadership, communication, and hiring than about any specific programming language or library.

Hey, there! I’m Cesar, a software engineer and lifelong learner. Visit my Gumroad page to download my ebooks and check my courses.

What is Threat Detection and Response (TDR)?

Threat Detection and Response (TDR) is an essential component of cybersecurity, working alongside threat prevention to safeguard organizations from cyber threats. Despite robust preventive measures, attackers often breach defenses, necessitating a proactive approach to detect and respond to threats. TDR involves continuously monitoring networks and endpoints to swiftly identify anomalies and indicators of compromise. By leveraging advanced tools and strategies, TDR enhances an organization’s ability to combat cyber threats, extending beyond traditional prevention methods.

TDR employs advanced analytical techniques such as behavioral analysis and artificial intelligence (AI) to uncover elusive threats. When a threat is detected, a coordinated response is initiated to investigate, contain, and eradicate the threat while fortifying defenses against future incidents. This cyclical process of detection, response, and refinement is crucial for maintaining cyber resilience.

The TDR process is typically managed by a Security Operations Center (SOC) and unfolds in several stages. Detection involves using a suite of security tools to continuously monitor endpoints, networks, applications, and user activities to identify potential risks and breaches. Cyber threat-hunting techniques are also employed to uncover sophisticated threats. Upon identifying a potential threat, AI and other tools are used to confirm its authenticity, trace its origins, and assess its impact.

Containment involves isolating infected devices and networks to prevent the spread of the attack. The SOC then works to eliminate the root cause of the incident, removing the threat actor and addressing vulnerabilities to prevent recurrence. Once the threat is neutralized, systems are restored to normal operations. The incident is documented and analyzed to identify areas for improvement, and lessons learned are used to enhance the organization’s security posture.

TDR tools are designed to detect and mitigate a wide range of cyber threats, including:

  • Distributed Denial-of-Service (DDoS) attacks, which overwhelm services with excessive traffic.
  • Malware, which steals data.
  • Phishing attempts, which trick users into divulging sensitive information.
  • Botnets, networks of compromised devices used for malicious purposes.
  • Ransomware, which encrypts and exfiltrates critical data.
  • Living-off-the-land attacks, where attackers use legitimate tools within the network.
  • Advanced persistent threats (APTs), prolonged and stealthy attacks targeting sensitive data.
  • Zero-day threats, which exploit previously unknown vulnerabilities.

Effective TDR programs leverage several key features:

  • Real-time monitoring of networks and endpoints detects anomalies early.
  • Vulnerability management identifies and remediates weaknesses in infrastructure.
  • Threat intelligence integration utilizes feeds to stay informed about the latest attack techniques.
  • Sandboxing analyzes potentially malicious code in an isolated environment.
  • Root cause analysis determines the underlying cause of incidents for effective remediation.
  • Threat hunting proactively searches for indicators of compromise and anomalous activities.
  • Automated response swiftly isolates and blocks threats, often integrated with Security Orchestration, Automation, and Response (SOAR) platforms.

To maximize the effectiveness of TDR, organizations should follow a few best practices:

  • Regular training ensures all employees are equipped to recognize and respond to threats.
  • Continuous improvement uses post-incident evaluations to refine response procedures.
  • Collaboration and communication foster teamwork within the security team and across departments.
  • An incident response plan provides clear steps for containment, eradication, and recovery.
  • AI integration enhances threat detection and response capabilities.

CloudDefense.AI offers a robust suite of tools designed to protect cloud infrastructures from cyber threats. With features like real-time threat detection, user behavior analysis, and security graph technology, CloudDefense.AI provides comprehensive visibility and protection. The platform’s AI-driven capabilities detect both known and unknown threats, prioritize risks based on their impact, and offer detailed graph-driven investigation tools for swift remediation. Additionally, CloudDefense.AI excels in detecting misconfigured APIs, preventing unauthorized access and data exposure.

For organizations looking to enhance their TDR capabilities, CloudDefense.AI represents a cutting-edge solution that streamlines detection and response efforts, ultimately strengthening cybersecurity defenses. Book a free demo with CloudDefense.AI to experience the future of threat detection and response.

Product-Led vs Sales-Led: What’s the Difference, and Which One Is Right for You?

Product-led vs sales-led growth? This is a decision you and your product team will have to make at some point in your go-to-market strategy to drive growth.

Choosing the right growth strategy is important for all SaaS companies because it plays a role in how successful your company will be. It lets you shape your product, customer experiences, and sales process in the very best way.

If you’re keen to delve deeper into these strategies, don’t miss the talk by Ben Williams at the Product Drive Conference. The talk explores the unique strengths of both approaches. Ben will reveal how you can harness the power of product-led sales to bridge the gap between PLG and SLG, unlocking a whole new level of growth for your business. Register now to secure your spot for FREE!

A Talk By Ben Williams (Advisor, PLGeek) on The beautiful synergy of product-led and sales-led growth.

In this blog post, you’ll get an honest opinion on both growth strategies, as well as how to pick the right one for your business and implement it.

Let’s get started!

TL;DR

  • A product-led growth involves using your product to drive business and revenue growth.
  • Sales-led growth, on the other hand, uses your sales processes to move customers along the funnel.
  • The major difference between both strategies is in the business processes.
  • While product-led growth offers customers a self-service model to experience and learn the product for themselves, sales-led companies provide 1-on-1 assistance to guide all sales-qualified leads through every stage of their journey.
  • They are, however, both relevant growth models with varying benefits.
  • Product-led companies, for example, use free trials and a freemium model to get customers to experience their perceived value.
  • This widens your top-of-the-funnel acquisition and results in product qualified leads who fall in love with your product early, and are more likely to convert.
  • Sales-led, however, is a great choice for companies whose software is more complex and targets enterprise organizations.
  • Customer success under a product-led growth approach depends on your product experience which must be as contextual and engaging as possible.
  • The lack of human guidance in this model means your product must be able to convert customers easily.
  • This is where segmenting all users based on characteristics or job titles comes in. With this, you can create a personalized onboarding flow relevant to each user’s journey.
  • Onboarding checklists can also serve as a guide to help users reach the activation stage immediately.
  • Collecting feedback from users including those in the trial stage is important too. This gives you insight into areas of your product flow or the customer experience that need to be worked on to increase adoption and conversions.
  • With a tool like Userpilot, you can easily build these contextual experiences into your product to increase user engagement and conversions.

Drive Product-Led Growth with Userpilot!

What is product-led growth in SaaS?

Product-led growth is a growth strategy that centers your product as the driving force for customer acquisition, retention, and monetization. This involves using your product experience to move customers further down the growth funnel.

In this case, product-led sales, driven by your product team and product experiences, largely determine how well your company will do.

What is sales-led growth in SaaS?

Sales-led growth is a strategy that uses a sales rep or your sales team and their processes to drive conversions and business growth. In a sales-led company, the sales team is the driving force for acquisition, retention, and other stages of the customer funnel.

This follows the traditional marketing funnel where leads are qualified into marketing-qualified leads and then sales-qualified leads. These sales-qualified leads are then passed to your sales team to nurture and convert.

Product-led vs sales-led: What’s the difference?

Now that you know both approaches have different driving forces behind them, here are some other major differences.

The product-led business model encourages users to learn the product by themselves to achieve their goals.

Using a mix of contextual product experiences and access to resource guides, users in a product-led company enjoy a self-service experience. This is unlike the sales-led approach where customers are assigned sales reps who guide them through every stage of their journey.

The reduced presence of human assistance in product-led makes it easier for customers to drop out of your sales funnel if they don’t get immediate value.

So the success of this model rests on your product team creating experiences that lead customers to their aha moments quickly, whereas in sales-led it is the responsibility of your sales teams to ensure customers see the value and convert.

Lastly, while the customer journey under product-led starts with signing up for free trials or freemium accounts, sales-led usually starts with customers requesting personalized demos.

This is indicative of their different sales cycles and also the types of companies they target.

Where product-led growth focuses on small businesses and has a shorter sales cycle, sales-led is used for more complex products with longer sales periods.

Product-led vs sales-led: Which one is right for you?

To determine which growth model is best for your SaaS company, there are several factors you must consider. These include your target audience, pricing model, and ease of use of your product.

For those who are interested in actionable insights from someone who has been in the trenches of both PLG and SLG, Ben Williams’ talk at the Product Drive Conference is a must-attend. Register here for FREE!

A Talk By Ben Williams (Advisor, PLGeek) on The beautiful synergy of product-led and sales-led growth.

If your product targets small businesses and is one that anyone can easily learn to use, a product-led approach is a better choice for you.

Sales-led on the other hand works best when you have a complex product targeting larger companies. These companies may not have the time to sign up for several free trials to learn how your product works. So sales reps and account executives guarantee customer success by guiding and helping them through the buying process and also with installation (if required).

Another way to determine which growth model is a better option is by looking at your price model.

Suppose you offer a freemium plan or a free trial that allows potential customers to test your product for free for a limited amount of time. In that case, a product-led system that encourages them to become paying customers is a good option.

If on the other hand, your product’s pricing is more complex and dependent on several factors, the sales-led strategy may be better for you.

What are the benefits of product-led growth?

Several popular SaaS businesses like Dropbox and Slack use a product-led growth model for their business. What benefits do these product-led companies enjoy from this strategy?

Here are some:

Wider top of the funnel (TOFU)

Product-led strategies typically allow customers to experience your product through a free trial or freemium package. They use landing page CTAs that ask visitors to start a free trial. This attracts a large number of prospects to your product, widening your top-of-the-funnel acquisition rate.

This means you are more likely to easily attract and onboard more customers than any of the other SaaS growth strategies because you offer the least risk and most benefit to them.

But a high acquisition rate doesn’t mean high retention; your product needs to be engaging enough to keep a large chunk of these sign-ups loyal to you.

Lower customer acquisition cost (CAC)

Since product-led companies practice a freemium or free trial model, their customer acquisition costs are relatively lower than those of most marketing-led companies. This is because the cost of using different marketing channels to attract, nurture, and convert users keeps increasing each year. Meanwhile, these channels have become a saturated, competitive space, and customers say they don’t like being sold to.

In product-led companies, the freemium pricing model reduces customer acquisition cost by doubling as an acquisition strategy. Allowing customers to try out your product for free works better at converting them into users. It’s also relatively cheaper than having your marketing team experiment with and track several marketing strategies.

Product led vs sales led customer acquisition costs
Source: Profitwell

Higher retention rate

Giving customers easy access to experience your product helps them quickly discover your value. Once they see your product as a valuable solution, the chance of customer retention becomes much higher than if you had only told them about your product’s value without giving them an experience.

Also, because your product is the driving force behind growth in this model, your focus will constantly be on ways to make it better. This focus on building the best product and experience for your users increases their retention rates.

What are the benefits of sales-led growth?

Sales-led growth also has numerous benefits which makes it a good choice for some companies. Some of these benefits include:

Less friction during onboarding

In sales-led companies, customers have a sales representative or member of your sales team to walk them through your product. These reps tell them what your product is about and show them how to use it.

Unlike in product-led, where customers go through the onboarding process following product cues, the sales-led model provides human guidance. This removes much of the friction and churn risk users may face in-product, makes user onboarding smoother, and increases the chances of them adopting your product.

Target enterprise organizations

A sales-led approach lets you target enterprise-level organizations.

These types of companies usually employ software that is more advanced and requires expertise to implement. So if your product fits this description, using a sales-led model could help you better reach and convert such enterprise companies.

Collect insightful customer feedback

Customer feedback is important for improving your sales processes and product experience. When using the sales-led approach, the feedback you collect can be more insightful because it is usually gathered face to face.

This can help you optimize your sales strategies so you can retain existing customers and attract new ones.

How to use the product-led approach to convert more users?

The success of the product-led growth approach lies in great product experiences that help users find immediate and consistent value.

So how can you do this?

Here are some key experiences you can build in your product to increase the trial to paid conversion rates.

Segment users with welcome screens and provide personalized onboarding

You likely have different types of customers who use your product for different things.

Giving every one of them the same product tour/experience will only result in poor product engagement and conversions. This is why good product managers use customer segmentation.

Segmentation is a very powerful tool in product marketing that helps you provide the best and most relevant experiences to your users. This makes the onboarding process more personalized for each customer’s use case. You can group customers based on their jobs to be done or behavioral patterns so they quickly discover your product’s perceived value.

Use a welcome screen and add a micro survey to it, asking users what their main goal with the product is.

Then use the data to provide a personalized onboarding path for each, guiding them to engage with specific parts of your product that are relevant to their use case.

Use interactive walkthroughs instead of long product tours

When learning how to drive, your driving instructor doesn’t just show you the car parts.

You also have to get behind the wheel and drive. That’s the difference between product tours and interactive walkthroughs.

Product tours simply show your users around your product, which is fine.

But interactive walkthroughs show them how to use these different parts of your product, which is better. Getting users to learn your product hands-on while experiencing it increases their chances of reaching their aha moments and achieving feature adoption quickly.

Product-led interactive walkthrough
Build interactive walkthroughs with Userpilot

Use checklists to drive users to the activation point

Checklists are a necessary tool used to help customers reach the activation stage faster.

Using a tool like Userpilot, you can create onboarding checklists that guide customers on how to use your product to get instant value.

These checklists show them a list of important tasks they must perform to get the actual value. They serve as a helpful guide for new customers instead of leaving them to wander the product on their own.

Use onboarding checklists to move customers to the point of value
Source: Rocketbots

Ask for feedback during the trial period

Customer feedback is important to help you improve product usage and conversion rates. Collecting feedback from every user who enters your sales funnel, especially in the trial phase, can help you optimize your conversion process.

With Userpilot, you can build these micro surveys into your product.

Don’t wait for the trial to be over. Remind users they are running out of time and ask them a simple question:

What’s stopping you from upgrading your account today?

Give them a few options, based on the most common reasons users churn or abandon a trial. Then automate responses to each, offering solutions that could increase the trial to paid conversion rate.

Maybe offering a trial extension could really come in handy to some users.

Conclusion

A sure sign of the product-led era is the number of popular SaaS businesses currently using this model to grow their businesses. Regardless of its popularity, product-led growth doesn’t exist to replace sales-led or marketing-led strategies.

Each strategy has its place and benefits; you only need to decide which works best for your type of audience and product. You may also choose to adopt a product-driven, sales-assisted strategy, as Userpilot does.

So as you ponder the merits of PLG vs SLG for your own venture, why not get some actionable insights from the pros? Ben Williams will be discussing just that at the Product Drive Conference.

Want to build product experiences code-free? Book a demo call with our team and get started!

Scale Product-led Growth with Userpilot!

Format strings in OCaml

OCaml doesn’t have string interpolation, but it does have C-style (yet type-safe) format strings. Here’s an example:

let hello name = Printf.printf "Hello, %s!\n" name
(* Can be written as: let hello = Printf.printf "Hello, %s!\n" *)

This is type-safe in an almost magical way (example REPL session):

# hello 1;;
Error: This expression has type int but an expression was expected of type
         string

It can however be a little tricky to wrap your head around:

# let bob = "Bob";;
val bob : string = "Bob"

# Printf.printf bob;;
Error: This expression has type string but an expression was expected of type
         ('a, out_channel, unit) format =
           ('a, out_channel, unit, unit, unit, unit) format6

This error is saying that the printf function wants a ‘format string’, which is distinct from a regular string:

# let bob = format_of_string "bob";;
val bob : ('_weak1, '_weak2, '_weak3, '_weak4, '_weak4, '_weak1) format6 =
  CamlinternalFormatBasics.Format
   (CamlinternalFormatBasics.String_literal ("bob",
     CamlinternalFormatBasics.End_of_format),
   "bob")

# Printf.printf bob;;
bob- : unit = ()

OCaml distinguishes between regular strings and format strings. The latter are complex structures which encode type information inside them. They are parsed and turned into these structures either when the compiler sees a string literal and ‘realizes’ that a format string is expected, or when you (the programmer) explicitly ask for the conversion. Another example:

# let fmt = "Hello, %s!n" ^^ "";;
val fmt :
  (string -> '_weak5, '_weak6, '_weak7, '_weak8, '_weak8, '_weak5) format6 =
  CamlinternalFormatBasics.Format
   (CamlinternalFormatBasics.String_literal ("Hello, ",
     CamlinternalFormatBasics.String (CamlinternalFormatBasics.No_padding,
      CamlinternalFormatBasics.String_literal ("!n",
       CamlinternalFormatBasics.End_of_format))),
   "Hello, %s!n%,")

# Printf.printf fmt "Bob";;
Hello, Bob!
- : unit = ()

The ^^ operator is the format string concatenation operator. Think of it as a more powerful version of the string concatenation operator, ^. It can concatenate either format strings that have already been bound to a name, or string literals which it interprets as format strings:

# bob ^^ bob;;
- : (unit, out_channel, unit, unit, unit, unit) format6 =
CamlinternalFormatBasics.Format
 (CamlinternalFormatBasics.String_literal ("bob",
   CamlinternalFormatBasics.String_literal ("bob",
    CamlinternalFormatBasics.End_of_format)),
 "bob%,bob")

# bob ^^ "!";;
- : (unit, out_channel, unit, unit, unit, unit) format6 =
CamlinternalFormatBasics.Format
 (CamlinternalFormatBasics.String_literal ("bob",
   CamlinternalFormatBasics.Char_literal ('!',
    CamlinternalFormatBasics.End_of_format)),
 "bob%,!")

Custom formatting functions

The really amazing thing about format strings is that you can define your own functions which use them to output formatted text. For example:

# let shout fmt = Printf.ksprintf (fun s -> s ^ "!") fmt;;
val shout : ('a, unit, string, string) format4 -> 'a = <fun>

# shout "hello";;
- : string = "hello!"

# let jim = "Jim";;
val jim : string = "Jim"

# shout "Hello, %s" jim;;
- : string = "Hello, Jim!"

This is really just a simple example; you are actually not restricted to outputting only strings from ksprintf. You can output any data structure you like. Think of ksprintf as ‘(k)ontinuation-based sprintf’; in other words, it takes a format string (fmt), any arguments needed by the format string (e.g. jim), builds the output string, then passes it to the continuation that you provide (fun s -> ...), in which you can build any value you want. This value will be the final output value of the function call.
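To make that concrete, here is a hypothetical sketch (the msg type and log_error function are made up for illustration) where the continuation wraps the formatted string in a variant instead of returning it directly:

# type msg = Info of string | Error of string;;
type msg = Info of string | Error of string

# let log_error fmt = Printf.ksprintf (fun s -> Error s) fmt;;
val log_error : ('a, unit, string, msg) format4 -> 'a = <fun>

# log_error "failure %d in %s" 42 "parser";;
- : msg = Error "failure 42 in parser"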

Again, this is just as type-safe as the basic printf function:

# shout "Hello, jim" jim;;
Error: This expression has type
         ('a -> 'b, unit, string, string, string, 'a -> 'b)
         CamlinternalFormatBasics.fmt
       but an expression was expected of type
         ('a -> 'b, unit, string, string, string, string)
         CamlinternalFormatBasics.fmt
       Type 'a -> 'b is not compatible with type string

This error message looks a bit scary, but the real clue here is in the last line: an extra string argument was passed in, but it was expecting 'a -> 'b. Unfortunately the type error here is not that great because of how powerful and general this function is. Because it could potentially accept any number of arguments depending on the format string, its type is expressed in a very general way. This is a drawback of format strings to watch out for. But once you are familiar with it, it’s typically not a big problem. You just need to match up the conversion specifications like %s with the actual arguments passed in after the format string.

You might have noticed that the function is defined with let shout fmt = .... It doesn’t look like it could accept ‘any number of arguments’. The trick here is that in OCaml, every function accepts only a single argument and returns either a final non-function value or a new function. In the case of functions which use format strings, it depends on the conversion specifications, so the formal definition shout fmt could potentially turn into a call like shout "%s bought %d apples today" bob num_apples. As a shortcut, you can think of the format string fmt as a variadic argument which can potentially turn into any number of arguments at the call site.
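As a quick REPL sketch of that last point, partially applying printf to a format string yields a function that simply waits for the remaining arguments:

# let report = Printf.printf "%s bought %d apples today\n";;
val report : string -> int -> unit = <fun>

# report "Bob" 3;;
Bob bought 3 apples today
- : unit = ()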

More reading

You can read more about OCaml’s format strings functionality in the documentation for the Printf and Format modules. There is also a gentle guide to formatting text, something OCaml has fairly advanced support for because it turns out to be a pretty common requirement to print out the values of various things at runtime.

On that note, I have also written more about defining custom formatted printers for any value right here on dev.to. Enjoy 🐫

A Guide to Atomic Design with React Components

React has quickly become one of the most popular open-source JavaScript frameworks for building user interfaces. Its component-based architecture provides a powerful way to break complex user interfaces into small, reusable pieces.

When combined with atomic design methodology, React allows developers to create highly modular and scalable component libraries for constructing UIs.

This guide will explore effectively implementing atomic design principles in React apps.

What is Atomic Design?

Atomic design is a methodology for creating design systems, developed by Brad Frost. It provides a way to break down interfaces into basic building blocks called “atoms.”

These simple UI atoms can then be combined into more complex reusable components.

The atomic design methodology splits components into five distinct levels:

Atoms

The foundational building blocks that can’t be broken down further include buttons, form inputs, typography, and color palettes.

Molecules

Groups of UI atoms joined together to form simple composite components. For example, a search form comprises an input box, button, and labels.

Organisms

More complex components comprise groups of molecules and/or atoms like headers, footers, lists, or menus.

Templates

Page layouts showing structured content and interface regions like sidebars, headers/footers, etc.

Pages

Fully designed and populated page instances drawing on components from all lower levels.

By breaking down interfaces into atomic components in this way, developers can more easily construct consistent, reusable UI parts that can be combined to build complex applications.

Atomic Design Used Case

Here are some ways that atomic design with React components can be used in web development:

  • Building component libraries – Atomic design provides a great way to structure reusable component libraries that can be shared across projects. Each atom, molecule, and organism becomes a reusable module.
  • Design systems – By codifying UI components into a consistent atomic library, reusable design systems can be developed and extended. This helps maintain branded design consistency across sites/apps.
  • Large web apps – For big web applications like dashboards and SaaS products, atomic design helps break complex UIs into manageable pieces that can be developed independently.
  • Team collaboration – Atomic components are easier for large teams to work with since they are decoupled. Developers can build new features by composing components without causing conflicts.
  • Faster prototyping – Creating an inventory of reusable components speeds up prototyping and building new views. New pages can be assembled from existing UI building blocks.
  • Improved accessibility – Making accessibility a priority for lower-level atoms and molecules causes it to propagate through all UI components by default.
  • Code maintenance – Atomic components isolate complexity into small pieces. This makes it easier to update, extend, and maintain code over time as requirements change.
  • Cross-platform UI – Components built using atomic design principles and React can be rendered across web, mobile, and even native platforms for cross-platform apps.

The scalability and abstraction of atomic design make it a great approach as web applications grow in size and complexity. React’s component model really shines when combined with atomic design principles for assembling complex UIs from simple building blocks.

Why Atomic Design Works Well With React

Several key aspects of React make it a natural fit for implementing atomic design:

Component Architecture

React is designed around building UIs through small, encapsulated components. This aligns directly with the ethos of atomic design. React components provide the natural mechanisms for implementing atomic design levels.

Reusability Focus

A core goal of atomic design is developing reusable interface components. React components are already designed to promote reusability through properties like composability and isolation.

Abstract Component Structure

React functional and class components have a simple, abstract structure well-suited for implementing atomic components with any UI rendering technology.

Design System Scalability

Atomic design allows extensive component libraries and design systems to be scaled effectively. React makes it easier to build complex UIs from simple building blocks.

Mature Ecosystem of Tools

The React ecosystem provides many libraries that complement implementing atomic design, like Storybook for component demos and Styled Components for consistent styling.

React provides the ideal framework for putting atomic design principles into practice. Next, let’s look at effectively implementing atomic design in a React app.

Implementing Atomic Design in React

While no strict rules are defined for applying atomic design, these steps provide a solid starting point:

Break the UI into Distinct Sections

Identify the distinct regions and component groups comprising the app’s user interface. This helps define initial component types and hierarchy.

Map Components to Atomic Design Levels

Determine which components are atoms, molecules, and organisms based on complexity, reusability, and composition. Keep components encapsulated and focused.

Organize Component Files/Folders

Structure files and folders to group related components. For example atoms/Button, molecules/SearchForm. Make it easy to navigate the component library.
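As a sketch, a folder layout along these lines (file names are illustrative) keeps the hierarchy easy to navigate:

src/
  components/
    atoms/
      Button.jsx
      Input.jsx
    molecules/
      SearchForm.jsx
    organisms/
      Header.jsx
    templates/
      PageLayout.jsx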

Build Up Complex Components

Use simple atomic components to compose higher-level molecules and organisms. Create complex UIs through composition, not just added complexity in standalone components.

Focus on Reusability

Build components to be as reusable as possible between sections of the app. Extract any duplicated component logic into reusable atoms/molecules.

Make Use of Helper Libraries

Use tools like Storybook, Styled Components, and Radium to build a consistent atomic component library and showcase isolated components.

While thinking “atoms first” helps guide the process, atomic design is iterative. As components evolve, re-evaluating their place in the atomic hierarchy helps keep the library coherent as it expands.

Atomic React Component Example

Let’s look at a React component example to see atomic design in practice. We’ll implement a common SearchForm component using lower-level atoms and molecules.

First, we create some basic input and button React atom components:

// Input atom
const Input = ({ placeholder }) => (
  <input type="text" placeholder={placeholder} />
);

// Button atom
const Button = ({ label }) => (
  <button type="submit">{label}</button>
);

Next, we use those atoms to build a SearchForm molecule:

// SearchForm molecule
const SearchForm = () => (
  <form>
    <Input placeholder="Search..." />
    <Button label="Search" />
  </form>
);

Our higher-level SearchForm molecule leverages the simpler Input and Button atoms to compose a reusable search component. We can render this molecule anywhere in our app that needs search functionality.

This example shows the power of atomic design. Combining reusable atoms into molecules lets us quickly build complex UIs without repeated logic.
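Continuing the sketch one level up, a hypothetical Header organism (not part of the original example) could compose the SearchForm molecule alongside other atoms:

// Header organism (illustrative)
const Header = () => (
  <header>
    <img src="/logo.svg" alt="Site logo" />
    <SearchForm />
  </header>
);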

Benefits of Atomic Design with React

Implementing atomic design principles with React components provides many benefits:

Reusable UI Building Blocks

Breaking UIs into atomic components creates reusable pieces that become intuitive “Lego blocks” for building complete interfaces.

Consistent Design Systems

The atomic design promotes design consistency across applications. Components stay decoupled from specific contexts.

Easier to Develop Complex UIs

Abstracting complex components into simpler atoms reduces cognitive overhead. Features can be added by composing new molecules from existing atoms.

Promotes Team Collaboration

Atomic components are easier for teams to understand. Developers can work independently on atoms/molecules and not create conflicts.

Scalable Component Libraries

Atomic design enables UI libraries to scale across large codebases. Lower-level components can be combined in many different ways.

Improved Developer Experience

Composing UIs from pre-built components speeds development and reduces bugs. Creating the components promotes learning.

Efficient Redesigns

Changing lower-level atoms/molecules automatically applies changes across an entire UI. Redesigns involve minimal effort.

React and atomic design work perfectly together to create reusable component libraries that make building, maintaining, and scaling complex applications much more manageable.

Final Thoughts on Atomic Design with React

Implementing atomic design principles on top of React’s component architecture provides a powerful way to organize and build complex user interfaces. Smaller atomic components naturally produce more reusable, adaptable, and scalable UI libraries.

However, care should be taken not to enforce atomic design rules dogmatically. Finding the right levels of abstraction for components takes some iterative analysis and refactoring. Flexibility is important as projects evolve.

The ultimate goal is to create clean component APIs that minimize complexity for developers using the library. Atomic design provides excellent guidance but needs some pragmatic interpretation based on context.

React combined with an atomic design approach to component architecture results in highly modular frontends. Components can be mixed and matched in many combinations to build consistent, functional UIs faster with fewer bugs.

Frequently Asked Questions About Atomic Design in React

What are some examples of React UI atoms?

Some common examples include buttons, form inputs, labels, images, icons, and color palettes. These simple elements form the building blocks for more complex components.

How can I organize my atomic React components?

Group components into folders by type like /atoms, /molecules, /organisms. Tools like Storybook and Bit can help create a browsable component library.

When should I break components down further into sub-components?

If a component becomes too complex, hard to reuse, or contains duplicated logic, it likely needs to be broken down into smaller atomic parts.

How do I make reusable React components with atomic design?

Focus on creating generic, abstracted component APIs vs. tailored functionality. Don’t couple components to specific use cases. Allow composition through props.

Can design systems be built using atomic React components?

Yes, atomic design principles are essential for building reusable design systems. React helps create versatile component libraries that can scale across projects.

Conclusion

Atomic design provides a proven methodology for creating modular, scalable component libraries. React’s component architecture helps naturally extend atomic design principles into interface development.

Together, they allow developers to construct complex, consistent UIs from simple building blocks efficiently. Component reusability and encapsulation keep growing codebases easy to maintain.

While some care is required to plan component architecture properly, the atomic design complements React’s strengths for building large-scale applications powered by composable user interface components.

If you find this post exciting, find more exciting posts on Learnhub Blog; we write everything tech from Cloud computing to Frontend Dev, Cybersecurity, AI, and Blockchain.

Resource

Streamlining the User Experience: Enhancing the Admin Panel for Seamless Management

What I built

I built a Refine Admin Panel that serves as a comprehensive tool for managing and organizing data. It provides various features and functionality to streamline the data management process and increase productivity.

Category Submission:

Project built using Supabase as the main data provider for the Refine app, and project built using Material UI.

https://warm-custard-2be6df.netlify.app/


Description

Refine Admin Panel is a user-friendly web application that provides an intuitive interface for managing and refining data. It is designed to simplify complex data management tasks and increase efficiency. Key features of the Refine Admin Panel include:

Data Import/Export: Easily import and export data in various formats such as CSV, Excel, and JSON.

Data Visualization: Generate visual representations of data using charts, graphs and other visualization techniques to facilitate understanding and decision making.

Data Filtering and Sorting: Filter and sort data based on specific criteria to locate and analyze information more effectively.

Bulk Operations: Perform bulk operations on data, such as editing multiple records at once, deleting selected data, or applying changes to a subset of data.

User Management: Manage user accounts, access levels and permissions to ensure secure and controlled access to the admin panel.

Customization: Customize the look and layout of the admin panel to suit your preferences and branding needs.

Automation and Integration: Integrate the refined admin panel with other systems or workflows using APIs or webhooks to automate data processing tasks and streamline operations.

https://github.com/abhixsh/Blog-logging-panel-refine

Permissive License

Background (What made you decide to build this particular app? What inspired you?)

I built this because I was eager to learn about CRUD operations.

How I built it (How did you utilize refine? Did you learn something new along the way? Pick up a new skill?)

I learned a lot about Refine along the way. Refine is a powerful open-source React framework for building CRUD-heavy applications such as admin panels, dashboards, and internal tools, providing hooks and components that make data management features quick to build.

Additional Resources/Info

  • Refine tutorial for building a complete CRUD app: https://refine.dev/docs/tutorial/introduction/index/
  • Refine official documentation: https://refine.dev/docs/

Web3: The Era of Decentralization 🌐

Introduction 📜

The internet has evolved significantly over the years, transforming the way we interact, communicate, and conduct business. As we enter a new era, the concept of Web3 has emerged, promising a decentralized and more inclusive internet experience. In this article, we will explore the fundamental principles of Web3, its potential impact on various industries, and the benefits it offers to individuals and businesses alike.

Table of Contents 📑

  1. Understanding Web3
  2. Key Features of Web3
  3. Decentralized Finance (DeFi)
  4. Non-Fungible Tokens (NFTs)
  5. Web3 and Data Privacy
  6. Web3 and Cybersecurity
  7. Web3 and Supply Chain Management
  8. Web3 and Digital Identity
  9. Web3 and Governance
  10. Web3 and Social Media
  11. Web3 and Healthcare
  12. Web3 and Education
  13. Web3 and Energy Sector
  14. Challenges and Limitations of Web3
  15. Conclusion

1. Understanding Web3 🌐

Web3 refers to the next generation of the internet that aims to address some of the limitations of Web2, the current version. Web3 is built on decentralized technologies, such as blockchain and peer-to-peer networks, which enable trustless interactions and remove the need for intermediaries. It empowers individuals by giving them more control over their data and digital assets.

2. Key Features of Web3 🚀

Web3 encompasses several key features that distinguish it from its predecessors:

Decentralization 🌍

Web3 is characterized by decentralization, where power and control are distributed across a network of participants. This ensures that no single entity has control over the entire network, making it more resilient and censorship-resistant.

Transparency and Immutability 🔒

Blockchain technology, a core component of Web3, provides transparency and immutability. Transactions and data recorded on the blockchain are visible to all participants and cannot be altered retroactively, enhancing trust and accountability.
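To make the immutability point concrete, here is a toy hash chain in Python. It is illustrative only: real blockchains layer consensus, signatures, and proof mechanisms on top of this basic idea.

import hashlib
import json

def block_hash(block):
    # Hash a block's contents, which include the previous block's hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Build a tiny chain of three blocks.
chain = []
prev = "0" * 64  # genesis marker
for i, tx in enumerate(["alice->bob:5", "bob->carol:2", "carol->dan:1"]):
    block = {"index": i, "tx": tx, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

# Tamper with the first block retroactively...
chain[0]["tx"] = "alice->bob:500"

# ...and verification fails, because every later block commits to the
# hash of the block before it.
ok = all(
    chain[i + 1]["prev_hash"] == block_hash(chain[i])
    for i in range(len(chain) - 1)
)
print("chain valid?", ok)  # -> chain valid? False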

Interoperability 🤝

Web3 promotes interoperability by enabling seamless interaction between different platforms and applications. Smart contracts, self-executing contracts based on predefined rules, facilitate automated and secure transactions across various blockchain networks.
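As a rough, off-chain illustration of the "predefined rules" idea, the toy Python class below releases funds automatically once its condition is met, with no intermediary deciding. Real smart contracts run on-chain (for example, written in Solidity); this sketch only mimics the shape of the logic.

# Toy escrow with a rule fixed at creation time (illustrative only).
class Escrow:
    def __init__(self, buyer, seller, amount):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.released = False

    def confirm_delivery(self):
        # The rule was agreed up front; neither party can rewrite it later.
        if not self.released:
            self.released = True
            print(f"{self.amount} released from {self.buyer} to {self.seller}")

escrow = Escrow("alice", "bob", "1 ETH")
escrow.confirm_delivery()  # -> 1 ETH released from alice to bob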

User Empowerment 💪

Web3 empowers users by enabling them to own and control their digital identities, data, and assets. Through cryptographic keys and decentralized storage systems, individuals have the ability to manage their online presence and engage in peer-to-peer transactions directly.

3. Decentralized Finance (DeFi) 💸

One of the most notable applications of Web3 is decentralized finance, or DeFi. DeFi aims to transform traditional financial systems by eliminating intermediaries and enabling direct peer-to-peer transactions. With Web3, individuals can access a wide range of financial services, such as lending, borrowing, and asset trading, without relying on centralized institutions.

4. Non-Fungible Tokens (NFTs) 🎨

NFTs have gained significant popularity in the Web3 era. These unique digital assets, represented as tokens on the blockchain, enable the ownership and trading of digital art, collectibles, and other digital assets. NFTs provide artists and creators with new ways to monetize their work and establish verifiable ownership.

5. Web3 and Data Privacy 🔐

Web3 emphasizes the importance of data privacy and gives individuals greater control over their personal information. Instead of relying on centralized entities to store and manage data, Web3 leverages decentralized storage and encryption techniques, ensuring that users’ data remains secure and private.

6. Web3 and Cybersecurity 🛡

By leveraging blockchain technology, Web3 enhances cybersecurity. The decentralized nature of Web3 makes it more difficult for hackers to compromise the network, as there is no single point of failure. Additionally, cryptographic techniques used in Web3 provide secure and tamper-proof transactions.

7. Web3 and Supply Chain Management 🚚

Web3 revolutionizes supply chain management by introducing transparency and traceability. Through blockchain-based systems, stakeholders can track the entire lifecycle of a product, ensuring authenticity, reducing counterfeiting, and promoting ethical practices.

8. Web3 and Digital Identity 🆔

Web3 enables individuals to have full control over their digital identities. Instead of relying on centralized identity management systems, Web3 leverages decentralized identity protocols. This empowers users to manage and verify their identities without the need for intermediaries.

9. Web3 and Governance 🗳

Web3 introduces new models of governance that are more democratic and transparent. Decentralized autonomous organizations (DAOs) enable collective decision-making through smart contracts, allowing participants to vote on proposals and influence the direction of a project or community.

10. Web3 and Social Media 👥

Web3 disrupts the traditional social media landscape by enabling decentralized social networks. These networks prioritize user privacy, provide rewards for content creation, and eliminate centralized control over user data, fostering a more democratic and user-centric environment.

11. Web3 and Healthcare 🏥

Web3 has the potential to revolutionize the healthcare industry by improving data interoperability, patient privacy, and medical research. Blockchain-based health records ensure secure and efficient data sharing among healthcare providers while maintaining patient confidentiality.

12. Web3 and Education 🎓

In the realm of education, Web3 offers new possibilities for lifelong learning, credential verification, and personalized education. Blockchain-based platforms can facilitate the issuance and verification of digital credentials, making educational achievements more accessible and trustworthy.

13. Web3 and Energy Sector ⚡

Web3 can drive innovation in the energy sector by enabling peer-to-peer energy trading, transparent carbon accounting, and decentralized energy grids. Through smart contracts and blockchain-based systems, energy transactions can be securely recorded and verified, promoting sustainability and renewable energy adoption.

14. Challenges and Limitations of Web3 ❗

While Web3 holds great promise, it also faces several challenges and limitations. These include scalability issues, regulatory uncertainties, and user adoption barriers. Overcoming these obstacles will be crucial for the widespread adoption and success of Web3.

Conclusion 🎉

Web3 represents the dawn of a new era in the internet’s evolution, emphasizing decentralization, user empowerment, and trustless interactions. Its potential impact spans across various industries, including finance, art, supply chain, healthcare, and more. As we embrace the era of decentralization, Web3 has the power to reshape our digital lives and unlock new opportunities for individuals and businesses.

Time Tracking: Should We or Shouldn’t We?

Executive Summary

What is time tracking in project management? Time tracking, as part of the time management process, records how long project teams take to complete their project phases and tasks. It allows the project manager to gauge the team’s productivity and progress on a project. However, many factors should be reviewed thoroughly when considering project time tracking.

Should We or Shouldn’t We?

Time tracking in today’s professional business world may be similar to playing a tense game of Frogger. You know you need to cross the road, but no matter which way you leap, there is a high probability that you will encounter obstacles that will ultimately end your game. So, the question remains: Should we or shouldn’t we require time tracking?

As a certified Resource Manager, I can easily persuade you to land on whatever decision you are leaning towards, for or against it.

If you are leaning against it, I would most likely recommend that you don’t do it. It probably isn’t worth the stress and disruption to your team at this time.

If you are leaning towards it or your business requires billing for services operations, you must determine what type of tracking you will require: Full Day or Time Against.

Before you decide, take a walk with me for a few minutes as I share some of my lessons learned about implementing time tracking.

Full Day Time Tracking

Full Day time tracking requires a resource to account for 100% of their working hours. This is the more complicated and demanding of the two approaches, and the most difficult type of time tracking to implement successfully and get adopted. While several key benefits exist, the drawbacks can have real consequences for the organization and the team environment.

Benefits:

  • Provides a full picture of productive vs. non-productive time/activities
  • Provides resource allocation justification
  • Provides performance accountability
  • Identifies where processes should be improved
  • Provides process improvement KPIs
  • Provides the most detailed level of estimating work effort and forecasting resource capacity

Drawbacks:

  • Can generate largely false data – Resources often arbitrarily enter 8 hours to meet the requirement. Others may underreport or inflate time to appear either more efficient or more overloaded than they actually are.
  • Often creates a negative working environment by generating a feeling of “Big Brother Watching” or being micromanaged.
  • Difficult to get buy-in from the resources and managers. This requires a lot of tough conversations and a strong dedication to taking the most unpopular avenue.
  • If Applied to Global Users – It can create complications in countries that have Works Councils and/or strict privacy rules
  • It can create legal complications if global, contracted, or non-exempt employees consistently report more than the agreed-upon contracted hours

Before you decide to implement a Full Day time tracking requirement, take a minute to be brutally honest with yourself – is this truly required to achieve what you need?

If you are unsure this is right for your team, don’t do it! If you plan to go down the Full Day time-tracking path, you must be fully committed to dealing with the objections, consequences, and occasionally outright defiance.

If this option worries you, perhaps consider implementing the less daunting time tracking type, Time Against.

Time Against Time Tracking

Time Against tracking requires a resource to account for the amount of time they spend working on a particular assignment, and the overall daily sum of hours is not as important. While time tracking will never be popular with resources, this is the less demanding, more user-friendly, and easier type of time tracking to successfully implement and get adopted. Following are a few of the benefits and drawbacks of this method.

Benefits:

  • Provides a more accurate picture of actual time spent on individual projects
  • Can identify where processes should be improved
  • Provides resource allocation justification when combined with resource management best practices
  • Provides performance accountability
  • Provides the most accurate level of estimating work effort and forecasting resource capacity
  • Eliminates the “Big Brother Watching” and micromanagement feeling
  • Easier to get buy-in from the resources as they are more likely to understand the request versus harboring feelings of having their privacy invaded.

Drawbacks:

  • Resources need to change their working behaviors in order to make entering time against projects a routine activity
  • There may be times when large blocks of time are unaccounted for, which ultimately should generate a conversation with the resource
  • If Applied to Global Users – It can create complications in countries that have Works Councils and/or strict privacy rules
  • It can create legal complications if global, contracted, or non-exempt employees consistently report more than the agreed-upon contracted hours

Which way to go?

Between the two time-tracking types, I have historically seen greater implementation success with the Time Against option. Resources are more open to accounting for how much time they spend working on something so long as every minute of their day is not tracked.

The result of this is that you typically have more reliable data that you can use to improve overall performance.

But First, Ask Yourself…

Regardless of which path you choose, there are several other questions that you need to take an open and honest look at:

  • Will time tracking improve your working environment or create more overhead? – This is a difficult question to be honest about. Our initial inclination is to start listing off all the ways things will improve: accurate effort estimating, resource forecasting, and resource load balancing. All good responses! However, I would challenge you to look at it again, especially after reviewing the next few questions. Time tracking always comes with additional overhead, negative emotions, and change management. It’s worth being honest with yourself about whether it is truly required.
  • Once you have this information, do you know what to do with it? – Be honest!!! Gathering information is great! Gathering accurate information is even better!! Do you know what to do with it now that you have it? Do you have an established resource management plan that you will be utilizing? Do you have knowledgeable Resource Managers who know how to analyze the data and turn it into something that can be used for executive- and team-level decisions? If you don’t, take some additional time to evaluate your needs.
  • Are you willing to risk changing the work environment by implementing time tracking? – Let’s face it, we hire our resources for their skills, their knowledge, and their experience. Asking them to track their time is often perceived as a slap in the face, even if you have valid reasons for the ask. Time-tracking discussions are very difficult to have! Be aware of the impact that this will have on your team’s morale.
  • Do you have the Leadership’s buy-in and support? – If they are not fully committed to backing up this requirement, it has a very low chance of succeeding. If your Executives are not fully on board, consider implementing another method to gather the data you require.
  • Are you willing to link time tracking to a compliance program?  – This typically impacts resources’ financials or reviews, positively or negatively. If you aren’t willing to take this step, understand that compliance will be extremely low. Why should I bother if it isn’t required, and there are no consequences for being non-compliant?

My Time Tracking Recommendation For You…

Ultimately, many factors should be reviewed thoroughly when considering project time tracking. At the end of the day, here’s my recommendation for you:

  • If you need time reporting, and a minute-by-minute breakdown isn’t absolutely necessary, have resources report time against particular assignments rather than a full day’s accounting.
  • Balance the non-reported time by adjusting their overall availability (a toy calculation follows this list). Link this initiative to a compliance program, whether you choose to utilize the carrot or the stick – although you will get a much better reaction with carrots! Take the time to explain to your resources exactly why this information is needed and lay out the WIIFM factors.
  • Finally, make sure to follow up with your team after some time has passed to review the results of this effort. Let them not only see but understand what benefits time tracking brings to the organization.
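As a toy illustration of balancing non-reported time by adjusting availability, here is a small Python calculation; all numbers, including the overhead allowance, are hypothetical.

# Toy capacity calculation (all numbers are hypothetical).
contracted_hours_per_week = 40
reported_project_hours = 29      # from "Time Against" entries
overhead_allowance = 0.20        # meetings, email, admin, etc.

# Instead of demanding a minute-by-minute accounting, treat a fixed
# share of the week as overhead and plan against what remains.
plannable_capacity = contracted_hours_per_week * (1 - overhead_allowance)
utilization = reported_project_hours / plannable_capacity

print(f"Plannable capacity: {plannable_capacity:.1f} h/week")  # 32.0 h/week
print(f"Utilization of plannable time: {utilization:.0%}")     # 91%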

Let’s Continue To Chat…

Contact Kolme Group to learn more about our Project and Change Management Services and what tools will help you achieve your time-tracking goals. 


Be sure to follow us on Twitter, LinkedIn, and YouTube, and use #KolmeGroup on shared posts!

How to Add Estimated Review Time and Context Labels to Pull Requests

The pull request (PR) review process, if not set up well in your team, can create a lot of bottlenecks in getting your code merged into the main branch and into production. By adding more context and information automatically to your PRs, you save yourself and your team work.

Take the scenario of fixing a typo in documentation. If there’s a backlog of PRs that need attention, such a PR may take two days — or longer — just to be approved. This is where continuous merge (CM) with gitStream comes in.

gitStream is a tool that allows you to add context and automation to your PRs, classifying PRs based on their complexity.

This ensures that a review won’t stay in the queue for long as it can be quickly assigned to the right person, immediately approved or have the appropriate action identified easily.

This hands-on article demonstrates how to add gitStream CM to your repository.

In this article, you’ll learn:

  1. How to Configure your Repository
  2. How to Create Pull Requests (PRs)
  3. How to Add the CM Feature to Your PRs

Quick gitStream Setup Guide

If you’re keen to get all the benefits of gitStream and continuous merge right away, all you need to do is follow these simple steps. An explanation of how gitStream works, how you can customize it, and more options follows right after.

  1. Choose Install for free on gitStream’s GitHub marketplace page
  2. Add two files to your repo:
     a) .cm/gitstream.cm
     b) .github/workflows/gitstream.yml
  3. Open a pull request
  4. Set gitStream as a required check

A Comprehensive Guide to gitStream & Continuous Merge

Filter functions and context variables are used to effect automated actions, such as adding labels (add-label@v1), assigning reviewers (add-reviewers@v1), and approving requests (approve@v1), among others.

Everything is included in a .cm configuration file named gitstream.cm.

All instructions to gitStream CM are detailed in the docs found at docs.gitstream.cm. gitStream also uses GitHub Actions to do its work, so you’ll need to add the gitstream.yml file to your GitHub Actions directory at .github/workflows/.

The main components to fulfill gitStream’s CM are:

  • The configuration files: gitstream.cm and gitstream.yml.
  • The filter functions: Code that tries to check and/or select certain data types from the input for checks during a PR creation.
  • The context variables: The inputs fed to the filter functions.
  • The automation actions.

Note: Some steps use Python only for demonstration purposes. It’s not required knowledge.

Prerequisites

To follow this tutorial, ensure you have the following:

  • Hands-on knowledge of Git and GitHub workings. You must know activities such as creating a repository, PRs, commits, and pushes.
  • A GitHub account.
  • Git installed in your working environment.

You can find and review the final project code here.

Step 1 – Set Up gitStream on Your Repo

Create an empty repo and give it a name, then install gitStream to it from the marketplace.

After installation, you can either: 1) Clone the repository to your environment; or 2) Create a folder and point it to the repository. This tutorial uses the second option.

Create a folder called gitStreamDemo. In this folder, create two directories, .github/workflows and .cm, using the commands in a terminal window below:

mkdir -p .github/workflows

mkdir .cm

In the .github/workflows folder, create a file called gitstream.yml and add the following YAML script:

name: gitStream workflow automation
on:
  workflow_dispatch:
    inputs:
      client_payload:
        description: The Client payload
        required: true
      full_repository:
        description: the repository name include the owner in `owner/repo_name` format
        required: true
      head_ref:
        description: the head sha
        required: true
      base_ref:
        description: the base ref
        required: true
      installation_id:
        description: the installation id
        required: false
      resolver_url:
        description: the resolver url to pass results to
        required: true
      resolver_token:
        description: Optional resolver token for resolver service
        required: false
        default: ''

jobs:
  gitStream:
    timeout-minutes: 5
    # uncomment this condition, if you dont want any automation on dependabot PRs
    # if: github.actor != 'dependabot[bot]'
    runs-on: ubuntu-latest
    name: gitStream workflow automation
    steps:
      - name: Evaluate Rules
        uses: linear-b/gitstream-github-action@v1
        id: rules-engine
        with:
          full_repository: ${{ github.event.inputs.full_repository }}
          head_ref: ${{ github.event.inputs.head_ref }}
          base_ref: ${{ github.event.inputs.base_ref }}
          client_payload: ${{ github.event.inputs.client_payload }}
          installation_id: ${{ github.event.inputs.installation_id }}
          resolver_url: ${{ github.event.inputs.resolver_url }}
          resolver_token: ${{ github.event.inputs.resolver_token }}

Next, create a file called gitstream.cm in the .cm folder and add the following code:

manifest:
  version: 1.0

automations:
  show_estimated_time_to_review:
    if:
      - true
    run:
      - action: add-label@v1
        args:
          label: "{{ calc.etr }} min review"
          color: {{ '00ff00' if (calc.etr >= 20) else ('7B3F00' if (calc.etr >= 5) else '0044ff') }}

  safe_changes:
    if:
      - {{ is.doc_formatting or is.doc_update }}
    run:
      - action: add-label@v1
        args:
          label: 'documentation changes: PR approved'
          color: {{ '71797e' }}
      - action: approve@v1

  domain_review:
    if:
      - {{ is.domain_change }}
    run:
      - action: add-reviewers@v1
        args:
          reviewers: []
      - action: add-label@v1
        args:
          label: 'domain reviewer assigned'
          color: {{ '71797e' }}

  set_default_comment:
    if:
      - true
    run:
      - action: add-comment@v1
        args:
          comment: "Hello there. Thank you for creating a pull request with us. A reviewer will soon get in touch."

calc:
  etr: {{ branch | estimatedReviewTime }}

is:
  domain_change: {{ files | match(regex=r/domain\//) | some }}
  doc_formatting: {{ source.diff.files | isFormattingChange }}
  doc_update: {{ files | allDocs }}

In the file, you’ll see the following four automation actions:

  • show_estimated_time_to_review: This automation calculates the estimated time a PR review may take.
  • safe_changes: This shows if changes to non-critical components done in a PR are safe, such as document changes. The PR is automatically approved.
  • domain_review: This automation runs to show if a change was made to the domain layer.
  • set_default_comment: This is fired every time a PR is opened and raises an acknowledgment comment to the user that a PR has been created.

At the end of the document, there’s a section containing filter functions for the automation actions. The actions are run after certain conditions specified in the filter functions or keys are met.

Step 2 – Calculating the Time to Review

The first automation checks the value of the etr variable and decides which label to assign to the PR. For more information on how ETR is calculated, check out this blog.

Create a file called main.py in the root of your folder. Then, create three folders using the command below:

mkdir views domain data

Add the following to the main.py file:

def show_message(name):
    print(f'Hello, {name}. Welcome to the gitStream world')

if __name__ == '__main__':
    # Deliberate bug for this tutorial: the function is defined as
    # show_message but called as print_hi. A PR will fix this in Step 3.
    print_hi('Mike')

Copy the main.py file into each of the three folders, renaming each copy to match its folder’s name (for example, domain.py in the domain folder).

For the dummy documentation file, create a README.md file in the root of your folder and add the following markdown script.

# gitStreamDemo

A demo showing how to set up gitStream on your first repo

Now, run these commands to initialize the repository, stage the files for committing, and make a commit, in that order:

git init

git add .

git commit -am "initialization"

Next, point the folder to your repository using the command below:

git remote add origin https://github.com/<your-username>/<your-repository>

Finally, push it:

git push -u origin main

Step 3 – Creating the Pull Request

As you may have noticed, there’s a sample bug in the code. In any programming language, you must call the function using its exact name. But in this case, print_hi was called instead of show_message. As a team member or an open-source contributor, you can fix this by opening a PR.

First, create a branch called fix-function-call and checkout into the branch using the commands below:

git branch fix-function-call

git checkout fix-function-call

Next, replace the name print_hi with show_message in all the .py files, then commit and push the changes.

git commit -am "changed function name"

git push --set-upstream origin fix-function-call

Now, open your repository in GitHub. You’ll see the following card:

Compare and pull request button

Click on Compare & pull request. On the next page, click the Create pull request button.

Once the gitStream automation has finished running, you’ll see the domain reviewer assigned tag. Additionally, a comment has been created.

Domain reviewer assigned tag

Next, add a Dijkstra’s shortest path algorithm script just below the show_message function in each of the .py files. The script calculates the shortest paths from a node in a graph; a sketch follows below.
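The original post links to an external script; as a stand-in, here is a minimal sketch of Dijkstra’s algorithm. The function name, graph format, and variable names are illustrative assumptions, not taken from the linked script.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> list of (neighbor, weight) pairs.
    # Returns the shortest distance from source to every node.
    distances = {node: float("inf") for node in graph}
    distances[source] = 0
    queue = [(0, source)]
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:
            continue  # stale queue entry
        for neighbor, weight in graph[node]:
            candidate = dist + weight
            if candidate < distances[neighbor]:
                distances[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return distances

# Example: shortest distances from node "a".
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # -> {'a': 0, 'b': 1, 'c': 3}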

Commit the changes and then push the code.

git commit -am "updates"

git push


Creating a Safe Change

For the final automation, you’ll add text to the README.md file created earlier. Create a new branch and check it out; you need a new PR to demonstrate this automation.

git checkout main

git branch update_docs

git checkout update_docs

Then, add this sentence to the README.md file:

Continuous Merging is very beneficial to the Open-Source Community.

Commit and push.

git commit -am "updated the docs"

git push --set-upstream origin update_docs

When the checks are done, you’ll see a different label with the PR already approved.

PR Approved

Help Developers Make the Most of Their Time…

Reviewing and merging PRs are crucial in contributing to software development and enhancing team productivity. However, being unable to classify PRs by complexity can lead to long wait times or much back-and-forth in the review process.

CM remedies this issue by classifying PRs based on their complexity and automating actions, including tagging the appropriate reviewers, assigning them PRs, and approving PRs, to reduce the backlog.

Check out gitStream to add CM to your existing repos.

Learn more about gitStream today!
