Letz Understand NPM Versioning: A Beginner’s Guide
By Holly Davis | ProdSens.live | Sat, 22 Jun 2024


Introduction

In this post, we’ll take a closer look at how npm packages (and applications in general) are versioned.


[ Hello Buddies ]

Let’s get started…

Now, let’s take the npm package “express” as an example.


We have an npm package called “express”, and its version is “^4.19.2”.
Now, let’s break that version down at each dot (.) and look at every part individually.

"^4.19.2"
  1. First Part = 4
  2. Second Part = 19
  3. Third Part = 2

We call the first part the MAJOR version.

1 – First Part > [ MAJOR ] = [ 4 ]

If “4” becomes “5” in the future, that signals a major update, which may break backward compatibility.

2 – Second Part > [ MINOR ] = [ 19 ]

If the current minor version “19” becomes 20 or higher, the new release adds functionality in a backward-compatible way, and it’s generally safe (and recommended) to update.

3 – Third Part > [ PATCH ] = [ 2 ]

If the current patch version “2” becomes 3 or higher, the new release contains small, backward-compatible bug fixes. Updating is optional, and doing so shouldn’t affect any functionality.

Now, let’s understand what the “^” symbol in the version means.

The caret (^) symbol means npm will accept minor and patch updates automatically, but never a new MAJOR version. So “^4.19.2” allows anything from 4.19.2 up to, but not including, 5.0.0.
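To make the caret rule concrete, here is a small Python sketch of the matching logic for versions at or above 1.0.0. This is an illustration only, not npm’s real implementation (which lives in the node-semver library and handles many more cases, such as 0.x versions and pre-release tags):

```python
def parse(version: str) -> tuple[int, int, int]:
    """Split a 'MAJOR.MINOR.PATCH' string into three integers."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def satisfies_caret(version: str, caret_range: str) -> bool:
    """Check whether `version` matches a '^MAJOR.MINOR.PATCH' range.

    Simplified rule for versions >= 1.0.0: the MAJOR part must match
    exactly, and the version must be at least the range's base version.
    """
    base = parse(caret_range.lstrip("^"))
    candidate = parse(version)
    return candidate[0] == base[0] and candidate >= base

print(satisfies_caret("4.19.3", "^4.19.2"))  # patch update -> True
print(satisfies_caret("4.20.0", "^4.19.2"))  # minor update -> True
print(satisfies_caret("5.0.0", "^4.19.2"))   # major update -> False
```

Tuple comparison does the heavy lifting here: Python compares `(4, 20, 0) >= (4, 19, 2)` part by part, which mirrors how semver precedence works.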


Thanks!!

The post Letz Understand NPM Versioning: A Beginner’s Guide appeared first on ProdSens.live.

Prompt Fuzzer: how to keep your agents on the right path
By Holly Davis | ProdSens.live | Mon, 20 May 2024


Good morning everyone and happy MonDEV! ☕

How are your coding experiments going? I hope everything is going well and that your inspiration is always abundant!

In the past few weeks, while wandering through the various tools on the web, I have come across many tools related to the world of AI. That doesn’t surprise me, given that it’s a hot topic at the moment, but recently I’ve found more than usual, including some genuinely interesting ones, so it’s bound to be a recurring topic on our Monday mornings!

I am actually thinking of creating a curated list of the AI tools I have found, so that they can all be easily retrieved in an organized way, for any need. What do you think, would you be interested? Let me know! 😉

Meanwhile, let’s go and see today’s tool, belonging to this world.

Among the various proposals we have seen tools to interface with LLMs, others that allow the easy creation of agents (and we will soon return to the topic), and also extensions that integrate LLM models with the browser (before it became mainstream).

However, there is a topic we have never touched upon: the security of prompts. Anyone passionate about this area knows how easy it is, without the correct instructions, to push an agent out of its conversation context and onto topics that should not concern it. Prompt security is a branch developing in parallel to prompt engineering.

Within this framework we find prompt-fuzzer, today’s tool. Prompt Fuzzer is an open-source tool written in Python and usable via CLI which, given a prompt, runs a series of tests based on the different attacks studied during this period of AI-sector growth, to verify how robust the prompt is.

From the interface, you have the possibility to modify your prompt, rerun the various tests, and check which prompts are more vulnerable and which are less.

For example, this is a test I did using a prompt I use to help me correct the articles I write:

[Screenshot: initial test results]

The result was not optimal, mainly because the prompt lacked instructions setting limits on the requests made to the model.

I then updated the prompt with more precise indications and the result visibly improved:

[Screenshot: improved results after refining the prompt]

Within the README in the GitHub repository, you will also find a table explaining the various types of attacks, so you can work better on improving your prompts.

Since AI is becoming more and more present in our projects, both personal and professional, I think being able to test how resistant they are to at least the best-known attacks is a good idea and can prevent unwanted surprises. 😉

What do you think of this tool? Had you already looked into how to make your agents’ prompts secure?

Let me know your thoughts 😁

For now, I just have to wish you a good week!

Happy Coding!

The post Prompt Fuzzer: how to keep your agents on the right path appeared first on ProdSens.live.

Caption This! 🤔💭
By Holly Davis | ProdSens.live | Sun, 21 Jan 2024


Can you come up with the wittiest caption to explain what’s happening here?

A Renaissance painting of a woman looking distressed while pouring out a drink

Follow the DEVteam for more online camaraderie!

The post Caption This! 🤔💭 appeared first on ProdSens.live.

Introducing Loco: The Rails of Rust
By Holly Davis | ProdSens.live | Thu, 21 Dec 2023


Although Ruby on Rails is not as popular as it used to be, in its prime it was a force to be reckoned with. Many successful businesses were built on it – Airbnb and Shopify being two of the biggest names to come out of it – although more recently Shopify has started experimenting with other languages, and late last year it announced that it would be officially supporting Rust.

This has led to many frameworks attempting to emulate the Rails philosophy, and Loco.rs is no different. In this case, however, it aims to solve a long-standing gap in the Rust web backend ecosystem: the lack of a truly batteries-included framework. Let’s talk about it.

Ruby on Rails was popular because it does the heavy lifting for you and abstracts away a lot of boilerplate, which means there is a very short gap between thinking of the business logic for an idea and reaching full productivity. This is great for a few reasons, especially in web development: you can ship faster without writing any of the boilerplate, you can rely on the framework to handle the difficult low-level details, and you don’t necessarily need to be fluent in Ruby to use it (although it helps massively if you are!). This is something a lot of web developers resonate with, as evidenced by the huge number of developers who use Laravel, a PHP framework very similar in spirit to Ruby on Rails.

It achieves this by putting everything behind a command-line interface: you use the command line to start the web service itself, to run migrations and job processing, and to create new controllers, models, and more. For example, rails generate model test will generate a model, rails generate controller test will generate a controller called TestController, and rails generate migration does the same for migrations.

This is somewhat at odds with Rust’s culture, which is why Loco.rs is interesting: Rust is a language that lets you get into the meat of the low-level details, so it tends to attract programmers who don’t mind doing extra work, either because they want things implemented to their own standard or because they want to understand how everything works so that when something breaks, they know how to fix it. In addition, Loco is not itself a standalone framework – under the hood it currently uses axum, alongside sidekiq-rs for job processing and sea-orm for database access and migrations.

Getting Started with Loco

To get started with the Rust Loco crate, you need to use their CLI which you can install by using the following:

cargo install loco-cli

You can start a new project by using loco new – it’ll ask you what the name of your app is and then what kind of app you want. For this article, we’ll be talking about the full Rust SaaS starter application.

Routing in Loco

Although Loco uses axum under the hood, it abstracts some things away into config files that you can find in the config folder.

The Axum service that gets run by the application implements the Hooks trait from loco_rs, which requires several functions – going to src/app.rs shows that we have functions for registering routes, getting the app name, connecting workers, registering tasks, truncating tables, and seeding data into the database. We can also add extra functions to the router that get hooked into the CLI: after_routes(), for adding things like middleware, and before_run(), which lets you carry out operations before the application itself starts. Note that any commands we use through the project CLI to generate things will automatically be appended to the app.rs file – no need to do it yourself!

To add a controller, we need to run cargo loco generate controller test from the project root, which generates a controller called test and simultaneously adds a new file in the controllers folder. Then we can create any routes we need, append them to the router in the same file, and they’ll automatically be added to the application – no further work required! Your new controller should look something like this:

#![allow(clippy::unused_async)]
use loco_rs::prelude::*;

pub async fn echo(req_body: String) -> String {
    req_body
}

pub async fn hello(State(_ctx): State<AppContext>) -> Result<String> {
    // do something with context (database, etc)
    format::text("hello")
}

pub fn routes() -> Routes {
    Routes::new()
        .prefix("test")
        .add("/", get(hello))
        .add("/echo", post(echo))
}

Now you can add any routes you want to this file, and they will be registered under this controller via the routes() function. Route-wise, this means they’ll all share the same prefix. You can access the database connection from the provided State – unlike in plain Axum, you don’t need to wire this up yourself.

The great thing about Loco’s routing is that everything you know from using Axum can be applied here – so if you know how to write your own extractors, write middleware, and other things this can all be used in Loco since it essentially builds on top of Axum.

Once you’ve finished adding all the controllers and routes you want, you can use cargo loco routes to display all of the routes your application currently has.

Models in Loco

Models in Loco represent the database models used by sea_orm. To get started, you’ll want to run the following:

cargo loco generate model 

This will then generate a model that you can use in your application. You can also initialise with extra fields to generate a full model:

cargo loco generate model movies title:string rating:int

Note that if you want to initialise with extra fields, you will want to check the reference docs to find which field types are available. Once you’re done adding all the models you need, simply run the following two commands to generate the migrations and entities:

cargo loco db migrate
cargo loco db entities

When you generate a blank model and open the model file, you will find something that looks like this:

use sea_orm::entity::prelude::*;

use super::_entities::notes::ActiveModel;

impl ActiveModelBehavior for ActiveModel {
    // extend activemodel below (keep comment for generators)
}

When we use this model in our controller, we typically won’t reference the struct that holds the model itself – instead we reference the ActiveModel or the Entity/Model types.

We can extend the behaviour of our ActiveModel by adding a before_save() method, like so:

use sea_orm::entity::prelude::*;

use super::_entities::notes::ActiveModel;

#[async_trait::async_trait]
impl ActiveModelBehavior for ActiveModel {
    // extend activemodel below (keep comment for generators)
    async fn before_save<C>(self, _db: &C, _insert: bool) -> Result<Self, DbErr>
    where
        C: ConnectionTrait,
    {
        println!("This is happening before we save something!");
        Ok(self)
    }
}

The ActiveModelBehaviour trait implementation (from sea_orm) allows us to define behaviour for an ActiveModel – more specifically, we can add methods for before and after saving a model, as well as before and after deleting a model. We can also extend the behaviour of our model by adding extra methods to it:

impl super::_entities::users::Model {
    // .. your own methods
}

Now we can use it in a handler function by loading the item from the database – then we can do whatever we need to with the data:

async fn load_item(ctx: &AppContext, id: i32) -> Result<Model> {
    let item = Entity::find_by_id(id).one(&ctx.db).await?;
    item.ok_or_else(|| Error::NotFound)
}

pub async fn update(
    Path(id): Path<i32>,
    State(ctx): State<AppContext>,
    Json(params): Json<Params>,
) -> Result<Json<Model>> {
    // use sea_orm to load an item based on the id
    let item = load_item(&ctx, id).await?;

    // turn the item into an ActiveModel that we can then use
    let mut item = item.into_active_model();

    // update the parameters of the current item with the new properties
    params.update(&mut item);

    // feed the new item back into the database
    let item = item.update(&ctx.db).await?;

    // return the updated item
    format::json(item)
}

However, that isn’t all Loco.rs has to offer. We can also use the loco_rs Validator to verify a new model before doing anything with it. A use case, for example, might be checking whether an email address is valid:

#[derive(Debug, Validate, Deserialize)]
pub struct ModelValidator {
    #[validate(length(min = 2, message = "Name must be at least 2 characters long."))]
    pub name: String,
    #[validate(custom = "validation::is_valid_email")]
    pub email: String,
}

impl From<&ActiveModel> for ModelValidator {
    fn from(value: &ActiveModel) -> Self {
        Self {
            name: value.name.as_ref().to_string(),
            email: value.email.as_ref().to_string(),
        }
    }
}

Job Processing in Loco

Like with everything else in Loco, you can also generate workers and tasks via the CLI. Running cargo loco generate task or cargo loco generate worker will let you generate a task or a worker at will.

Under the hood, Loco uses sidekiq-rs for job processing – a Rust re-implementation of its Ruby counterpart, sidekiq.rb. Once the worker is generated, go to the workers folder and open the file you created: it will have a struct for the worker itself, a struct holding the arguments the worker takes, an implementation of the AppWorker trait for the worker, and an async trait implementation that lets the worker do its work.

You can then run it like so:

ReportWorkerWorker::perform(&boot.app_context, ReportWorkerWorkerArgs {})
    .await
    .unwrap();

As you can see, we don’t need to initialise the struct to use it – we can call the method on the struct directly, and it will work as long as the arguments are valid. Here, no arguments are required, since the args struct has no fields.

Deploying Loco

Currently, Loco.rs allows you to generate a deployment by using the following command:

cargo loco generate deployment

This lets you choose between Docker and Shuttle. Choosing Docker generates a Dockerfile that you can use to deploy anywhere, while picking Shuttle automatically generates everything you need for a Shuttle deployment – no further work required! You can then use the Shuttle CLI to start a new project and deploy it:

// note that if you want to avoid using the --name flag
// you should use the name key in Shuttle.toml
cargo shuttle project start --name 
cargo shuttle deploy --name 

Finishing Up

Thanks for reading! Loco is a great framework that shows a lot of promise and is growing very quickly. Building a REST API in Rust has never been easier!

Interested in more?
Check out the full tour of Loco here.
Check out their discussions here.

The post Introducing Loco: The Rails of Rust appeared first on ProdSens.live.

PPC Keyword Research: The Complete Guide
By Holly Davis | ProdSens.live | Thu, 21 Dec 2023


Pay-per-click (or PPC) marketing can feel like a daunting task. From creating ads to monitoring performance and understanding bidding strategies, there’s a lot to take in. But PPC keyword research is an often under-appreciated, yet necessary, component of search engine marketing.

With proper keyword research, you can more accurately build ads and landing pages that encourage clicks from users. Completing PPC keyword research can ultimately lead to more conversions on your website and a positive return on your ad spend, which is why it’s worthwhile.

Free Guide, Template & Planner: How to Use Google Ads for Business

In this post, I’ll walk you through everything you need to know about PPC keyword research — from why it matters to how to begin researching and beyond.

Table of Contents

What is PPC keyword research?

PPC keyword research refers to the process of identifying keywords to include in pay-per-click advertising campaigns, usually through Google Ads or other search engine marketing platforms.

The goal is to identify keywords you want to bid on as part of your PPC campaigns. Then, your ads will display when users search for those keywords.

Here are the core metrics used to evaluate keywords:

  • Average Monthly Searches: The number of times the keyword is searched per month on the search engine.
  • Cost-per-Click: An estimate of how much you’ll pay each time a user clicks your ad when it appears for this keyword.
  • Competition (CMP): A score from 0-100 in Google Ads’ Keyword Planner that indicates the level of competition for placing an ad for a keyword. When looking for new keywords in Keyword Planner, these are denoted as low, medium, or high.
  • Top of Page Bid: An estimate of how much you’ll need to bid on a particular keyword to ensure your ad appears in the keyword’s search results. Since this can fluctuate, Keyword Planner gives you both a low-range and high-range estimate.

These basic metrics are important because they can help you estimate the amount you’ll need to spend per month on your ads for them to be effective.
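As a rough illustration of how these metrics combine, here is a Python sketch that ballparks a monthly budget for a single keyword. The numbers are hypothetical, not real Keyword Planner data, and the click-through rate is an assumption you would estimate yourself:

```python
def estimated_monthly_spend(monthly_searches: int,
                            expected_ctr: float,
                            avg_cpc: float) -> float:
    """Rough monthly ad spend for one keyword.

    monthly_searches: average monthly searches for the keyword
    expected_ctr: fraction of searchers expected to click your ad
    avg_cpc: estimated cost per click, in your currency
    """
    expected_clicks = monthly_searches * expected_ctr
    return expected_clicks * avg_cpc

# Hypothetical keyword: 40,000 searches/month, 2% CTR, $1.50 CPC
print(estimated_monthly_spend(40_000, 0.02, 1.50))  # 1200.0
```

Running the same calculation across every keyword in a campaign gives you a quick sanity check before committing to a bidding strategy.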

Different Types of PPC Keywords

We can divide keywords in several different ways. Each category is important to understand so that you can craft relevant ads that properly meet the search intent behind the keywords.

Some of these categories are provided directly by Google in your Google Ads account. SEM tools provide others as a helpful way to further guide your PPC keyword research.

Search Intent

There are a few ways to categorize keywords by search intent, but some common categories provided by SEM tools include:

  • Informational: The searcher is looking for information, such as definitional content or “how-to” guides.
  • Navigational: The searcher is looking for a specific website or company, often to log in to an existing account.
  • Commercial: Searchers are looking for products or services, typically in research mode. They may be looking for something specific, but not necessarily to make a purchase at that moment.
  • Transactional: These types of keywords have the highest purchase intent. The user is looking to take action right at that moment, whether that action is booking tickets, buying an item, or signing up for a service.

Different searches have different intent behind them. Sometimes, people are just looking for information. These informational keywords lend themselves well to search engine optimization (SEO), which focuses on organic search results. However, for PPC campaigns, they are less effective.

The most effective keywords to target for PPC are those with high search intent, mainly commercial or transactional keywords.

With these keywords, you can capture search traffic when purchase intent is highest. Your ads are likely to attract more website traffic and convert more users to customers for these keywords.

Many brands will choose to also bid on keywords in the navigational category, particularly for searches of their own company. It’s a way to double down on traffic capture alongside the organic search results for your company.

You can also leverage navigational (or “branded”) keywords to bid on your competitors’ brand names and potentially steal some of their search traffic. (Sneaky, I know.)

Keyword Length

Keywords can also be divided into short-tail keywords or long-tail keywords.

Short-tail keywords are the most popular way to search for a topic, product, etc. They have the highest monthly search volume, but they’re not very specific.

A good example would be “boots.” According to Keyword Planner, this term is searched between 10,000 and 100,000 times monthly.

On the other hand, long-tail keywords are less popular ways to search for topics or items. They usually contain more qualifying terms but are searched less frequently per month.

Using the example above, a long-tail keyword in this instance would be “brown women’s ankle boots.” This keyword is a lot more specific and might yield better results in a PPC ad campaign. However, it’s searched far less frequently at 10 to 100 times monthly.

Keyword Match Types

Depending on your product, service, or campaign, you might want to be highly specific with your keywords. At other times, you might want Google to do some of the PPC keyword research for you. That’s where match types come in.

When you add keywords to your PPC campaign, you can choose between:

  • Exact Match
  • Phrase Match
  • Broad Match
  • Negative Keywords

If you select Exact Match, Google will only display your ads for exactly the keyword term you have entered, plus extremely close matches such as small spelling errors, plural forms, or reordered words. For example, your ad will display for both “women’s boots” and “boots women” searches.

Phrase Match slightly expands your selected keywords so your ad will show up for variations of the search term. If you’re targeting “women’s boots,” your ad might also display under search terms like “best women’s boots.”

When you select Broad Match, Google will display your ads for phrases that are related to your keyword. In the same example, your ad may also display under search terms like “women’s doc martens” or “brown ankle boots.”

Negative Keywords are keywords that you instruct Google not to show your ad for at all. In your boots campaign, for example, you might want to use a Broad Match strategy. But you’ll want to instruct Google not to display your ads for terms like “men’s boots” or “women’s sandals” by adding them as Negative Keywords. Similarly, you might want to exclude terms like “free” or “sale.”

Why does PPC keyword research matter?

Understanding the keywords you want to target — and how you want to target them — directly impacts the effectiveness and cost of your PPC campaigns. That’s why PPC keyword research matters.

But let’s get more granular with why it’s crucial for your business.

Ad Relevance and Quality Score

Relevancy isn’t just important as a fundamental advertising principle. Google takes the relevance of your ads against the keywords you’re targeting and uses that information in a way that impacts your ad performance.

Alongside your ads, Google looks at the landing page you’re using and your past performance on Google Ads (the number of clicks your ads have earned) and gives you a Quality Score between 1 and 10, which measures how relevant your ads and landing page are to the keywords you’re targeting.

When it comes to bidding on keywords, Google favors ads with a high Quality Score. So if your score is low, either your ad strategy will be ineffective or you’ll have to spend a lot more to appear in searches for your chosen keywords.

Return on Advertising Spend (ROAS)

At first glance, it can seem easy to load a campaign with perfect keywords. They relate exactly to your product or service and the purchase intent is high.

But this kind of strategy can quickly lead to an extremely high ad spend — and ads that don’t perform.

PPC keywords should be carefully categorized into separate campaigns, with specific landing pages built for each campaign. The estimated cost-per-click and the competition metrics are all indicators you can use to figure out how much your campaign will cost to run.

By paying close attention during keyword research, you can ensure a positive return on your PPC investment, also known as Return on Ad Spend (ROAS). This means the revenue you earn from your ads outpaces what you spend to run them.
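The ROAS arithmetic itself is simple; here is a quick sketch with hypothetical figures:

```python
def roas(revenue: float, ad_spend: float) -> float:
    """Return on Ad Spend: revenue generated per unit of currency spent."""
    return revenue / ad_spend

# Hypothetical campaign: $6,000 in ad-driven revenue on $1,500 of spend
campaign_roas = roas(6_000, 1_500)
print(campaign_roas)        # 4.0
print(campaign_roas > 1.0)  # True: revenue outpaces spend
```

A ROAS above 1.0 means revenue exceeds spend; whether that is actually profitable also depends on your margins and other costs.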

Seasonal Targeting

Just like purchasing habits, lots of keywords have seasonal fluctuations. The searches for given keywords can spike at different times of the year or can trend upwards sharply based on market conditions or global events.

Careful keyword planning means you can target keywords and how much you spend on them at just the right time.

You can use tools such as Google Trends (or “Glimpse”) to see how keyword searches spike at different times. Here are the search trends for our “women’s ankle boots” example over the past five years:

Without fail, the searches for this term spike massively from October to November and start to trend back down around January. This type of insight can improve how efficient your PPC campaigns are, as you can see when to turn them off and switch them on for maximum results.

How to Research PPC Keywords

Do you need help with the step-by-step process for PPC keyword research? I’m here to help.

In this guide, imagine we’re an ecommerce store launching a new range of women’s ankle boots for the fall/winter season; this scenario will guide some of our decisions.

1. Create your Google Ads account.

First things first, head to Google Ads and set up your account. Ensure you’re signed into the Google Account under which you want to run your ads. Click “Sign In.”

When you create your account, you’ll need to select whether you’re an individual or an organization, confirm your address, and provide payment details.

2. Switch your account to expert mode.

Google defaults your Google Ads account to Smart Mode.

This makes it very straightforward to set up and run ads, but you won’t be able to use Keyword Planner or see a lot of the detailed metrics you want to double-check for your PPC keyword research.

So, before you get started, use the Settings icon on the right-hand side of the gray toolbar and select “Switch to Expert Mode.”

3. Access keyword planner.

Use the hamburger icon in the top left corner to open the menu. Select “Tools” and click “Keyword Planner.”

You’ll see a screen with two options: “Discover new keywords” or “Get search volume and forecasts.” At this stage of your ad planning, you’ll click the first option to start checking out your keyword options.

4. Start finding keywords.

Now comes the fun part. Think carefully about the products or services you’re trying to promote. Your PPC campaign should be specific to a category, especially if you offer a wide variety of product or service options.

Start entering keywords that relate to your product. Here are some that you might enter to kick off a campaign for online sales of women’s ankle boots:

Tip: If you’re really stuck for keywords, try using the option to “Enter a site.” Keyword Planner will use the URL to pull a list of PPC keywords found on the web page you’ve entered. You can use your own site or a competitor site to start generating ideas.

Here are the results Keyword Planner gives us with these starting keywords:

But the planner also gave us a huge list of keyword ideas that we might want to use in our campaign. Unfortunately, many of the top ideas relate to Doc Martens boots, and we don’t sell those.

That doesn’t mean we should discount the entire list, however, as you’ll see in the next step.

5. Refine your keyword ideas.

Keyword Planner gives you a few different ways to refine the list of keyword ideas it has provided for you.

First, you can use the keyword suggestions under the “broaden your search” box to add in new ideas:

Next, you can use the “Refine” button to remove different options from the keyword ideas. For example, you can remove all brand-related searches so that terms like “Doc Marten” or “Timberland” are removed from the keyword ideas.

Similarly, Google will interpret the keywords you’re looking at and give you lots of options to refine the provided keyword ideas. In our example, Google enables us to remove different colors, styles, materials, and more from the keyword ideas.

All you have to do is de-select various options to refine your keyword idea list:

Now, Keyword Planner is giving us a much more relevant set of keyword ideas for our women’s ankle boots campaign:

6. Save your keywords and test results.

Start selecting the box to the left of the keywords you think would be worth targeting for your PPC campaign. This can take some time as, even with refinement, Keyword Planner will provide an extensive list of ideas.

Once you’ve selected a group of keywords you’d like to explore targeting, you can add them to an existing Ad Group or create a brand new one:

Create your ad group, and then select “Add Keywords” in the same dark blue bar. You’ll then be able to view your list of selected keywords under the “Saved Keywords” tab.

At any stage, you can click the circular + button to add more keywords to your list, or you can go straight to “Create Campaign” in the top right to start building your ad copy and bidding strategy using these keywords.

7. Keep an eye on your forecast results.

As you continue to refine your keywords by adding new ideas or negative keywords, it’s a good idea to keep an eye on the Forecast tab. This gives you an estimate of the results you can expect from the keywords you have selected:

However, these forecasts should be used as a guideline only. The actual results of your campaign will depend on the quality of your ads, landing page, keyword strategy, and bidding strategy.

8. Rinse and repeat.

Depending on how many campaigns you’d like to run, you can repeat this process over and over for your keyword research, creating a new Ad Group for each new category of keywords.

Staying organized with your saved keywords and ad groups is key to creating highly targeted, successful campaigns that drive a positive ROAS.

Best PPC Keyword Research Tools

Curious which PPC keyword research tools are worth using? I’ve rounded up a few of my favorites.

1. Google Keyword Planner

Sometimes, the best place to start with PPC keyword research is to get the information right from the horse’s mouth. With Keyword Planner, you’re getting data directly from Google, so you can rely on its accuracy.

That being said, it’s not the smoothest tool, so it might feel a little overwhelming for beginners. You’re also limited to Google, even if you also want to run PPC campaigns on other search engines like Bing or Yandex.

What I like: The best part about Keyword Planner is that it’s completely free to use. But it also gives you the ability to move straight from keyword research into implementing your PPC campaign all from one place.

2. Ahrefs

Ahrefs is a popular tool for both organic and paid search engine marketing. While the keyword research portion of the dashboard is more heavily geared towards organic Search Engine Optimization (SEO), it gives you plenty of PPC data, too.

Like Keyword Planner, you can organize keywords into lists. When you’ve narrowed down a starting list of keywords, you can export the list to paste them into Google Ads as needed.

What I like: Ahrefs is a visually appealing tool and a little easier to wrangle than Keyword Planner. It’s also more intuitive to generate new keyword ideas off the bat.

3. SEMRush

SEMRush has a lot of functions for digital marketers, including SEO, social media posting, content marketing, and more. When it comes to PPC keyword research, it offers similar functionality to Ahrefs but with more organization.

SEMRush has a specific PPC keyword tool for you to easily analyze, group, and remove keywords from different lists according to the campaigns you want to run.

For an extra monthly spend, you can also access the AdClarity extension to sneak a peek at competitors’ ad spend and performance.

What I like: The ability to easily filter out duplicate keywords across lists helps ensure your ad groups remain clean and are not competing with each other to bid on the same keywords.

4. Keywords Everywhere

Keywords Everywhere is a Chrome extension that enables you to examine keyword data right in the Google search results.

Simply enter a keyword, and you’ll get data on the search volume, Cost-per-Click (CPC), search trends, and competition.

It’ll even give you this data on related search terms in a separate box to the right-hand side of your Google search results.

What I like: Keywords Everywhere is very cost-effective, especially for beginners or marketers who run a small number of campaigns. You can buy 100k credits for $15 and simply use the credits as you need rather than being tied into a monthly subscription.

Start your PPC keyword research today.

Whether you’re a total beginner to Google Ads or a seasoned marketer looking to refine your PPC strategies, keyword research is the place to start.

Effective PPC keyword research creates efficiencies in your ad spend and performance so your ads can continually provide successful bottom-line results.

Go get researching.


The post PPC Keyword Research: The Complete Guide appeared first on ProdSens.live.

Gated Content: What Marketers Need to Know [+Examples] (Tue, 01 Aug 2023) https://prodsens.live/2023/08/01/gated-content-what-marketers-need-to-know-examples/


As William Shakespeare once wrote, “To be or not to be, that is the question.” Marketers have a similar classic debate: gated versus ungated content.

While 80% of B2B content marketing assets are gated and lead generation is one of the top objectives for marketers, it’s not an open and shut case. That’s why we’ve gathered everything you need to know about gated content in this post.


Here, we’ll explore what gated content is and how it compares to ungated content. Then, we’ll dive into gated content best practices and look at some examples.

Table of Contents

What is Gated Content?

Gated vs. Ungated Content

Best Practices for Gated Content

Gated Content Examples

So, how does gated content work?

Usually, users arrive at your website and see a CTA or pop-up that offers them access to a piece of content in exchange for their information. This could be their email address in exchange for a content offer, for example.

It’s important to note that gated content for inbound marketing is free and not hidden behind a paywall. Users just need to submit their information to access the content.

Now, you might be wondering, “Why would I hide my content from my audience?”

Typically, the goal of gated content is to generate leads. Marketers will create targeted content for their audience and use it to attract leads. Gated content isn’t used for brand awareness or visibility campaigns because the nature of hidden content doesn’t allow for high traffic.

Below, let’s discuss the pros and cons of gated versus ungated content.

As you can probably tell, gated and ungated content both serve different purposes. But you might be wondering what the pros and cons are. Let’s dive into it now.

Pros and Cons of Gated Content

Gated Content

Ultimately, gated content is meant to generate leads that you can nurture into prospects through your marketing efforts, whereas ungated content is meant to increase traffic and improve trust with your audience.

Both types of content are valuable and should be included in your content marketing strategy.

After reading this list, you might be wondering, “How do I know if I should gate my content?”

Well, it all depends on your goals — brand visibility or lead generation.

Additionally, consider the type of content. Longer-form content like an ebook is suited to gating, while shorter-form content such as blog posts is better off ungated.

Once you’ve decided to create a piece of gated content, you’re probably curious about how to get started. Let’s review some best practices below.

1. Create content for each stage in the buyer’s journey.

When a prospect goes through the buyer’s journey, they’ll go through three stages: awareness, consideration, and decision.

Here’s a quick rundown of each stage:

HubSpot’s buyer’s journey.

During each stage, it’s important for your audience to have content that meets them where they are.

For instance, visitors in the awareness stage are probably interested in reading an ebook. On the other hand, a visitor in the decision stage might prefer a product demo or webinar.

That’s why it’s important that your content offers are designed for each stage of the buyer’s journey. If your gated content is aligned with their journey, your audience is more likely to convert.

2. Complete a competitive analysis.

Once you’ve brainstormed some content ideas for each stage of the buyer’s journey, it’s time to conduct a competitive analysis.

In a competitive analysis, you’ll research what your competitors are doing. This means looking up what type of content offers they offer. Pay attention to what content is gated versus ungated.

This will give you a good idea of what content of yours should be gated.

3. Provide incentive.

As an inbound marketer, you know that providing value is of the utmost importance.

Your content offer shouldn’t be a quick blog post. Instead, your gated content should provide actionable, valuable content.

Just as importantly, your gated content should be relevant to your audience.

When your content provides true value, it gives your audience an incentive to fill out that form and give you their contact information.

4. Build a strong landing page.

When a user clicks on a CTA for a content offer, they’re usually led to a landing page. So, one of the best practices for gated content is to build a strong landing page.

For example, HubSpot’s State of AI Report landing page contains a strong headline, compelling copy, a section for FAQs, and a simple form.


Chances are, your landing page will include a form where visitors can input their contact details in exchange for your content offer. It’s important that your form is straightforward, easy to use, and user-friendly.

HubSpot offers a free online form builder that enables you to create and customize forms with a drag-and-drop form maker.

5. Segment your audience.

Once your audience has downloaded your gated content and you receive their email address, it’s time to segment your email lists.

This will help you develop email marketing campaigns that are targeted and effective.

Additionally, segmenting your audience means you can send nurturing emails to move those leads to prospects.

6. Measure the analytics.

When you’ve decided to gate a certain piece of content, that means you can track conversions and measure your analytics.

As with any marketing strategy, measuring your success is extremely important. This data will help you understand your audience better and improve your content strategy.

Now that you know some best practices for creating gated content, let’s look at types of content and examples of what this will look like in action.

Gated Content Examples

1. White papers.

A great example of gated content is a white paper. A white paper is an authoritative, in-depth report on a specific topic.

Usually, these are long-form pieces of content that are interesting and valuable to your audience.

White papers make great gated content because of the value they provide. Additionally, it helps your brand become an industry expert on a topic. When you’re a trusted expert, people want to know what you have to say.

This means you’ll get more people to download your offer.

2. Ebooks.

An ebook is another popular type of gated content. Unlike a white paper, an ebook is usually a shorter guide on a specific topic.

Ebooks can also give your brand authority and build trust with your audience. Usually, ebooks are used in the awareness and consideration stage of the buyer’s journey.

3. Templates.

One of my favorite forms of gated content is the template. Providing a template is a tactical, actionable piece of content.

The perceived value of a template is much higher than that of an ebook or a white paper, which means your audience is more likely to input their contact information to receive it.

Templates are a great gated content offer for folks in the consideration and decision stage of the buyer’s journey.

4. Webinars.

With a webinar, you’ll educate your audience to learn more about a topic. You’ll develop trust, build relationships, and hopefully, inspire.

For prospects who are in the decision stage of the buyer’s journey, webinars are an excellent gated content offer.

Again, webinars have a high perceived value, which makes your audience more likely to fill out that form.

Back to You

With gated content, it’s important to consider what types of content you’re offering and make sure it’s suited to your audience. Ultimately, gated content should be targeted and help you generate leads.


The post Gated Content: What Marketers Need to Know [+Examples] appeared first on ProdSens.live.

Dockerize Your Nextjs App Like a Pro: Advanced Tips for Next-Level Optimization (Sun, 09 Apr 2023) https://prodsens.live/2023/04/09/dockerize-your-nextjs-app-like-a-pro-advanced-tips-for-next-level-optimization/


Creating a new application with your team and having it up and going on your team’s system can be difficult because you need your app to run in different environments and yet behave the same way.

Docker, thankfully, is a great solution that streamlines the process. By bundling your application code together with its dependencies and configuration, it guarantees that your app will function the same regardless of where it is deployed. This allows you to concentrate on building the most effective software possible without being sidetracked by environment issues.

Moreover, to deploy your app to platforms like Fly.io, you need to containerize your app using Docker.

And that’s where this guide will help you.

In this guide, we will walk you through the process of creating a Docker image of your Next.js application and then we’ll look at how to optimize it. But before that, let’s look at the advantages of using Docker.

Benefits of Using Docker for App Deployment

As a developer, you are aware of how critical application deployment is.

The deployment process, however, can be difficult since there are so many things to take into account, such as compatibility, scalability, portability, and rollback capabilities.

This is where Docker comes into play.

As discussed above as well, Docker makes deployment easier by integrating your application’s dependencies into a single container, which provides various benefits for developers.

Improved scalability is a fundamental benefit of using Docker for deployment. You can rapidly grow your application horizontally using Docker by running several instances of your container in different servers, allowing it to handle increased traffic without affecting performance.

A container is a standalone, running instance of your Docker image.

By offering a consistent environment independent of the host system, Docker also makes portability easier. This implies that you can easily deploy your application across several environments, including development and production.

Docker also makes updates and rollbacks straightforward. Every time you update your application, you simply generate a new Docker image and re-run your containers from it.

And if the current version of the app causes problems, you can just as easily roll back to an earlier version by deploying an older Docker image.

Further advantages of Docker include increased security, inexpensive infrastructure, and simpler developer collaboration. You can learn more about that by clicking here.

Now that we’ve learned about the benefits of using Docker, it’s time to install the prerequisites.

Essential Prerequisites for Creating a Docker Image

To ensure a smooth creation of the Docker image, there are a few prerequisites that need to be met:

  1. Node.js and NPM must be installed on your system.
  2. Docker needs to be installed as well.

Once these requirements are fulfilled, we can proceed with creating the Next.js app.

Creating a Blog Website

For this tutorial, it’s not crucial to create a new blog or e-commerce Next.js application from scratch. Instead, installing a starter app from Vercel can be a more efficient approach.

To do this, simply run the following command in your terminal on your preferred directory:

npx create-next-app --example blog-starter blog-starter-app

It will create a Next.js blog app.

To deploy your application, you must first build it using the command specified in your package.json file. You can build a Next.js app by running the following command:

npm run build

By executing this command, a “.next” folder will be generated, containing all the files and assets defined in your Next.js application. The logs will also display the pages that were generated.

To run the Next.js app, execute the following command:

npm start

After the server is ready, go to localhost:3000 to see the output. You should see a blog app similar to this. It should be noted that the scripts used here are defined by default in the “package.json” file.
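For reference, the default scripts generated for this starter typically look like the following sketch (exact entries and versions may vary between template versions):

```json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  }
}
```

This is why "npm run build" and "npm start" map to Next.js’s build and production-server commands.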

Writing a Dockerfile

Before we begin, let us first define the terms Container and Image, and why we are building a Dockerfile.

A container executes an image: a pre-built package containing the application code and its dependencies. To construct an image, you’ll require a Dockerfile, which is a set of instructions that tells Docker how to build the image.

Docker, in a nutshell, allows developers to simply generate, distribute, and deploy images, resulting in faster development cycles and simpler application management.

With that stated, let’s make a “Dockerfile” in our root directory and paste the following content within it.

FROM node:16-alpine
RUN mkdir -p /app
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]

Here, we defined a base image, i.e. node:16-alpine.

What exactly does it do? It tells Docker to download the node:16-alpine image from Docker Hub (with Node.js, npm, and other necessary tools pre-installed).

Afterward, as per the instruction, it creates a directory and sets it as a working directory, then copies the application files into the working directory.

The “RUN npm install” and “RUN npm run build” lines install dependencies and build the application. Finally, we expose port 3000 and start the application using the command “npm start”.

This was our configuration stored inside the Dockerfile.
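One optional but useful addition: because “COPY . .” copies the entire build context into the image, a .dockerignore file in the root directory keeps local artifacts out of it. A minimal sketch (these entries are suggestions, not requirements of the tutorial):

```
node_modules
.next
.git
npm-debug.log
```

Excluding node_modules and .next also speeds up the build, since dependencies are reinstalled and the app rebuilt inside the image anyway.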

Now, let’s build an image using this.

docker build -t blog-website-1 ./

We tagged (or named) this image “blog-website-1” and then specified the directory where the Dockerfile is located, which is the root directory for us.

Logs

As you can see from the logs, it is retrieving the base image and then executing everything specified in the Dockerfile.

After the process finishes, it will generate a docker image, which you can see using the “docker images” command.

Docker Image

It’s 622MB in size. In the next section, we’ll see how you can reduce this number drastically.

To run this image within a container (specifying the port as 3000) use the following command.

docker run -p 3000:3000 blog-website-1

You can now go to localhost:3000 and see your app running, but this time from the Docker container.
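If you’d rather not retype the docker run flags each time, an equivalent Docker Compose file can do the same job. This is a hypothetical sketch using the image name from this tutorial:

```yaml
services:
  blog:
    # Use the image we built above
    image: blog-website-1
    ports:
      # host:container
      - "3000:3000"
```

Running “docker compose up” in the same directory would then start the container on port 3000.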

Optimizing the Docker Image

Previously, we created a simple Dockerfile and provided some basic instructions to create an image that can be run inside a container.

However, there is room for further optimization as the current image size is 622MB which is not ideal for production apps. There are various methods to optimize the image size, and we will focus on a simple approach based on the factors listed below.

  • Pick a smaller base image (though, as we’ll see later, it’s not always the deciding factor): a smaller base image can reduce the overall image size.
  • Combine commands: You can combine the “RUN npm install” and “RUN npm run build” instructions into a single “RUN npm ci --quiet && npm run build” command. This reduces the number of layers in the image and therefore its size.
  • Use multi-stage builds: You can divide your Dockerfile into two stages: the first stage builds your app, and the second copies only the files required to run it. This drops the redundant files and tooling that the first stage needed.

Here, we will primarily use multi-stage builds to demonstrate how we can easily cut down the image size.

Let’s proceed with the optimization process by replacing the contents of the existing Dockerfile with the following instructions.

# Build Stage
FROM node:16-alpine AS BUILD_IMAGE
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build


# Production Stage
FROM node:16-alpine AS PRODUCTION_STAGE
WORKDIR /app
COPY --from=BUILD_IMAGE /app/package*.json ./
COPY --from=BUILD_IMAGE /app/.next ./.next
COPY --from=BUILD_IMAGE /app/public ./public
COPY --from=BUILD_IMAGE /app/node_modules ./node_modules
ENV NODE_ENV=production
EXPOSE 3000
CMD ["npm", "start"]

We have divided our Dockerfile into two stages, i.e. a “Build Stage” and a “Production Stage”. The commands are pretty self-explanatory and straightforward.

For the build stage, the commands are similar to the Dockerfile we created earlier. On the other hand, in the production stage, we are just copying the files that we need from the build stage and running the app.

Let’s build a new image from the new Dockerfile and tag it “blog-website-2”.

docker build -t blog-website-2 ./

And as you can see from the “docker images” command, the second image is around 60MB smaller.

Docker Image

You can run this image by running the command “docker run -p 3000:3000 blog-website-2” as well, and you will get the same blog website.

Taking Docker Image Optimization to the Next Level

As we can see, even after optimizing with multi-stage builds, the image is still fairly large, and smaller Docker images are easier to deploy and scale.

This is why we will be exploring other ways to further optimize our image size. For that, create a “next.config.js” file in the root directory and paste the below code.

/**
 * @type {import('next').NextConfig}
 */
const nextConfig = {
  experimental: {
    outputStandalone: true,
  },
};

module.exports = nextConfig;

According to the documentation, it will create a folder at “.next/standalone” which can then be deployed on its own without installing “node_modules”. It is also one of the most effective methods for optimizing the docker file. You can learn more about it here.
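Note that the experimental outputStandalone flag applies to the Next.js version used here; in Next.js 12.2 and later the option was stabilized as a top-level setting, so on a newer release your next.config.js would look like this instead:

```javascript
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Stable replacement for experimental.outputStandalone in Next.js 12.2+
  output: 'standalone',
};

module.exports = nextConfig;
```

If you’re on an older release, keep the experimental form shown above.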

Let’s modify the Dockerfile now.

FROM node:18-alpine as builder
WORKDIR /my-space

COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:18-alpine as runner
WORKDIR /my-space
COPY --from=builder /my-space/package.json .
COPY --from=builder /my-space/package-lock.json .
COPY --from=builder /my-space/next.config.js ./
COPY --from=builder /my-space/public ./public
COPY --from=builder /my-space/.next/standalone ./
COPY --from=builder /my-space/.next/static ./.next/static
EXPOSE 3000
# The standalone output bundles its own minimal server (server.js),
# so run it directly rather than via "npm start"
ENTRYPOINT ["node", "server.js"]

The code in this section is mostly identical to the Dockerfile we created earlier using a multi-stage build.

As you can see, we did not use node_modules here, but rather the standalone folder, which will optimize the image to a greater extent.

Let’s build a new image from the new Dockerfile and tag it “blog-website-3”.

docker build -t blog-website-3 ./

The resulting image size has now been reduced to 187MB.

Docker Image

What Really Matters for Docker Image Disk Usage

In the earlier section on optimizing Docker images, we explored many aspects that might influence image size, such as using a smaller base image. However, some sources, such as Semaphore, claim that the size of the base image is irrelevant in some cases.

Instead of the size of the base image, the size of frequently changing layers is the most important factor influencing disk usage. Because Docker images are composed of layers that may be reused by other images, the link between size and disk usage is not always clear.

Two images with the same layers, for example, might have drastically different disk consumption if one of the layers changes often. As a result, shrinking the base image may be ineffective in saving disk space and may restrict functionality.

If you wish to go more into the topic, you may do so here.

Is Docker Right for You? Here’s When to Consider It

Docker, as we’ve seen, is a great tool for managing software dependencies and providing consistent environments.

Knowing when to use Docker and when to create a Docker image is crucial in achieving the benefits of this platform.

Docker is ideal for:

  • An Isolated environment (for creating and testing apps)
  • Deployment purposes
  • Scalability
  • Continuous Integration and Deployment (CI/CD)

It goes without saying that if you’re using Docker, you’ll need Docker images as well.

So, whether you require an isolated environment, want to deploy apps reliably and consistently, or want to ensure consistency across different environments, Docker can help.

While Docker helps with deployment and scaling, you still need a front-end ready to be deployed and turned into a full-stack app. For this, you can use the Locofy.ai plugin to generate modular, and highly extensible Next.js apps directly from your Figma & Adobe XD design files.

You can use the auto layout feature on Figma to make your designs responsive on the Locofy.ai plugin and even if your designs don’t utilise auto layouts, the plugin offers a Design Optimizer feature that uses AI to apply auto layouts to your design files.

Once your designs are responsive, you can use the Auto Components feature to split your design elements into working React components, making them easy to extend.

Hope you like it.

That’s it — thanks.

The post Dockerize Your Nextjs App Like a Pro: Advanced Tips for Next-Level Optimization appeared first on ProdSens.live.

What was your win this week? (Fri, 07 Apr 2023) https://prodsens.live/2023/04/07/what-was-your-win-this-week-7/


Hey folks! 👋

Hope y’all all have wonderful weekends. 😀

Looking back on this past week, what was something you were proud of accomplishing?

All wins count — big or small 🎉

Examples of ‘wins’ include:

  • Starting a new project
  • Fixing a tricky bug
  • Going on a nice walk with the dog 🦮

A french bulldog getting walked, or rather dragged, in the snowy city streets

The post What was your win this week? appeared first on ProdSens.live.

How to Get a Job as a Digital Project Manager (Mon, 03 Apr 2023) https://prodsens.live/2023/04/03/how-to-get-your-foot-in-the-door-as-a-dpm/


Holly Davis, Digital Project Manager for White October

At the Digital PM Summit in Philadelphia, a conference full of digital project managers (DPMs), it was interesting to hear about the various ways DPMs have found their way into a career in project management.

Listening to some of the panels, talks and in conversations with the DPM community, I heard that many, like me, have fallen into the career by accident.

It’s often because they have gained the transferable skills found in most good PMs: a combination of soft skills, organization, and a natural aptitude for getting things done.

But the profession is growing up and new entry paths are emerging. The most exciting of these is the emergence of apprenticeships, along with the rise of internships and work experience placements in both the UK and US.

Scrum Certifications are also increasingly in demand as agencies begin to explore a hybrid of agile and waterfall methodologies.

Dave Prior, agile consultant for Leading Agile, ran an agile retrospective at the Summit with 100 attendees, 63% of whom claimed to be practicing both scrum and waterfall in the same organization.

As was clear from the Philadelphia Summit, what works for one person or agency might not work for another.

Post-conference I caught up with some DPMs and agencies to see what they had to say.

1. Consider getting certified

Certifications can open doors, according to Carson Pierce, PMP and CSM Certified and works at DDB.

“Can you be a PM without a PMP? Of course. Most people do. Will you be a better PM with it? Of course – all knowledge is valuable.

A lot of people poo-poo the PMP certification, but it’s been huge for me. I understand the concepts behind the things we do and the software we use. If I talk to someone in a different field, we can talk the same language.

And those three little letters have gotten me a lot of respect from clients, which is really helpful in building trust quickly.”

Another certification to consider is the Google Project Management certificate, which is an entry-level alternative if you don’t meet the eligibility criteria for PMI’s Project Management Professional exam.

2. Learn the skills on the job

Mel Wilson, Project Manager at Incuna, had the following to say.

“After graduating from university, I decided to pursue a career in project management. Unfortunately, the majority of vacancies were looking for at least 1-2 years’ experience, which I didn’t have.

I therefore decided to interview for a support admin position at White October – one of the leading agencies in Oxford. Although this wasn’t exactly the job I wanted to be in eventually, it provided me with great exposure to the workings of an agency and all of its functions.

Within a year, I had learnt the basics of project management and was able to quickly prove myself as a DPM. The support admin role was just a stepping stone for me; 18 months on, I’m now where I want to be. I’d recommend this route to others.”

3. Dip your toe in the water with an apprenticeship or internship

Anna Lewis, Senior Recruiter, Viget

“We recognize that certifications in certain industries and environments can be useful, but they don’t usually prepare someone for a career at Viget.

They don’t prepare people for what PMs actually do day-to-day in our agency setting. We’re looking for individuals who are smart, detail-oriented, unflappable problem solvers — certifications can’t tell us whether someone has those skills and qualities the way we think a 10- or 12-week apprenticeship scheme can.

Most applicants for our apprenticeship have no previous experience of project management. When we embark on an apprenticeship with an individual, there are questions for both Viget as a company and for the apprentice: will they be a good fit, will they actually like the job, what are their strengths.

For Viget, an apprenticeship seems to be a great way to start answering those questions.

At the end of the apprenticeship, if an apprentice has developed an informed perspective on what it’s like to work in our industry, what the project management job requires, and whether it’s right for them, then we think that’s really positive and valuable.”

If you can’t find a scheme like this where you live, contact an agency you aspire to work for and ask if they’d be interested in doing something like this at a smaller scale.

Shadow days, where you shadow a project manager for a day a week, can be an incredibly useful introduction to a career in project management.

Parting words

The face of the profession is continually evolving. This time next year there will be new standards, processes, and tools, and with them new opportunities for people wanting to pursue a career in project management.

As you can see, the options are vast, so if you’re feeling overwhelmed, start by doing some research. Think about the types of projects and companies you might like to work for.

The way companies manage projects can vary massively. Do your homework, find out what they look for in a potential employee, and then map out which of the options outlined above might help you get there!

A version of this article first appeared in 2015.

This article first appeared at Rebel’s Guide to Project Management

]]>
https://prodsens.live/2023/04/03/how-to-get-your-foot-in-the-door-as-a-dpm/feed/ 0