AWS Credentials for Serverless
ProdSens.live (Neomi), Thu, 25 Apr 2024
https://prodsens.live/2024/04/25/aws-credentials-for-serverless/

AWS is a zero-trust platform[1]. That is, every call to AWS must provide credentials so that the caller can be validated and their authorization checked. How you manage these credentials varies with the execution environment. A developer who sets up a workstation with personal AWS credentials will often find that the application they are building cannot (and should not) consume credentials in the same way. Why do these differences exist? And how should application credentials be managed?

For credentials on your workstation, AWS recommends using IAM Identity Center SSO. This lets you verify your identity (often with a standard, non-AWS identity provider like Google or Okta) and then assume an IAM role to obtain a set of temporary credentials. This works well and is fairly secure, especially if your identity provider is set up with multi-factor authentication (MFA). Because the AWS credentials are short-lived, even if they leak, they expire quickly, limiting exposure. Why can’t we take this approach with the applications we are building?

We want to have the application assume a role and pick up short-term credentials. However, we can’t use the workstation approach because we need user authentication (SSO/MFA) to establish the user, and that’s not possible at the application’s runtime. To get out of this jam, we can rely on the fact that all our application runtimes are serverless and will happen within an AWS service (in our case Lambda or Fargate). This establishes sufficient trust such that we can assign the execution environment a role and let it obtain short-term credentials.

In this article, I want to examine how your application running in either Lambda or Fargate ought to get its AWS credentials. We’ll discuss how the AWS SDKs use credential provider chains to establish precedence and one corner case I found myself in. Let’s dig in.

Credential Sources

As mentioned earlier, you can provide credentials to your application in several ways including (but not limited to) environment variables, config/credentials files, SSO through IAM Identity Center, instance metadata, and (don’t do this) directly from code. Irrespective of which method you choose, you can allow the AWS SDK to automatically grab credentials via its built-in “credentials provider.”

The mechanism for selecting the credential type is called the “credential provider,” and the built-in precedence (i.e., the order in which it checks credential sources) is called the “credential provider chain.” This behavior is language-agnostic. Per AWS documentation, “All the AWS SDKs have a series of places (or sources) that they check to get valid credentials to use to make a request to an AWS service.” And, once they locate any credentials, the chain stops and those credentials are used.

For the Node.js SDK, that precedence is generally:

  1. Explicit credentials in code (again, please don’t do this)
  2. Environment Variables
  3. Shared config/credentials file
  4. Task IAM Role for ECS/Fargate
  5. Instance Metadata Service (IMDS) for EC2
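The chain above can be sketched as follows. This is a simplified illustration, not the SDK’s actual implementation: each provider either returns credentials or null, and the chain stops at the first one that succeeds.

```javascript
// Simplified sketch of a credential provider chain -- an illustration,
// not the AWS SDK's real code. Each provider returns credentials or null;
// the chain stops at the first one that succeeds.
function chainProviders(...providers) {
  return () => {
    for (const provider of providers) {
      const creds = provider();
      if (creds) return creds; // first match wins; later sources are never checked
    }
    throw new Error('CredentialsProviderError: no credentials found');
  };
}

// Providers in precedence order (explicit code > env vars > task role)
const fromExplicit = () => null; // nothing hard-coded -- good!
const fromEnv = () =>
  process.env.AWS_ACCESS_KEY_ID
    ? { accessKeyId: process.env.AWS_ACCESS_KEY_ID, source: 'env' }
    : null;
const fromTaskRole = () => ({ accessKeyId: 'ASIA_EXAMPLE', source: 'taskRole' }); // would call STS

const resolveCredentials = chainProviders(fromExplicit, fromEnv, fromTaskRole);
```

Note how setting the environment variable “shadows” the task-role provider: once credentials turn up in the environment, nothing further down the chain is ever consulted.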

So, we can pass in credentials in many ways. Why should we choose one over another? Each approach varies in terms of security and ease of use. Fortunately, AWS allows us to set up our credentials easily without compromising security. We are going to focus on two recommended approaches: environment variables (for Lambda) and the task IAM role (for Fargate).

Environment Variable Credentials

Credentials in environment variables are, perhaps, the easiest way to configure your AWS SDK. They are near the top of the precedence list and will get scooped up automatically when you instantiate your SDK. For example, if you set the following environment variables in your runtime:

export AWS_ACCESS_KEY_ID="AKIA1234567890"
export AWS_SECRET_ACCESS_KEY="ABCDEFGH12345678"

Then when you instantiate your AWS SDK, these credentials will get loaded automatically, like so:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb'
const client = new DynamoDBClient({ region: 'us-east-1' }); // Loads env credentials

Note that the AWS_ACCESS_KEY_ID begins with “AKIA”. This signifies that it is a long-term access key with no expiration. These keys are attached to an IAM user or, if you are reckless, the AWS account root user[2].

Alternatively, you may run across AWS credentials that look like the following:

AWS_ACCESS_KEY_ID=ASIA1234567890
AWS_SECRET_ACCESS_KEY=ABCDEFGH12345678
AWS_SESSION_TOKEN=XYZ+ReallyLongString==

These credentials are short-lived. You can tell this both by the presence of the AWS_SESSION_TOKEN and that the AWS_ACCESS_KEY_ID begins with “ASIA” instead of “AKIA”.
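A tiny illustrative helper (not part of any AWS SDK) captures this prefix convention:

```javascript
// Illustrative only: classify an AWS access key ID by its prefix.
// AKIA -> long-term key attached to an IAM user (no expiration).
// ASIA -> temporary key issued by STS (comes with a session token).
function keyType(accessKeyId) {
  if (accessKeyId.startsWith('AKIA')) return 'long-term';
  if (accessKeyId.startsWith('ASIA')) return 'temporary';
  return 'unknown';
}

console.log(keyType('AKIA1234567890')); // long-term
console.log(keyType('ASIA1234567890')); // temporary
```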

When you use a credential provider, it consumes the Access Key, Secret, and Session Token. These tokens can be set to expire anywhere from 15 minutes to 12 hours from issuance. This would be a drag if you had to repeatedly go fetch these short-lived tokens and save them so your application can use them. Fortunately, you don’t have to. Both Lambda and ECS offer built-in mechanics to provide your application with short-term credentials. Let’s start with Lambda.

Using Credentials in Lambda

Add the following line to one of your Lambdas:

console.log(process.env);

And you’ll see AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN. How did they get there? AWS added them for you using the (required) IAM role attached to your Lambda. During initialization, AWS calls out to the Security Token Service (STS), obtains short-term credentials, and then conveniently injects those credentials into your Lambda’s environment variables.

Lambdas are special in this regard. You (or the credentials provider) don’t have to do anything extra to fetch credentials; they are just there for you to use. Why?

Lambdas are short-lived. Even under constant use, every hour or so they are automatically recycled. This means that a single short-lived token can serve the Lambda that is using it; no re-fetching of tokens is necessary. For example, if AWS sets Lambdas to last no more than an hour before being decommissioned, it can set the expiration for the access token to just over 60 minutes and the application using the token will never need to fetch another.

Having your credentials provider automatically find and use the credentials in Lambda’s environment variables is both the recommended and easiest approach. This is a true win/win.

Using Credentials in Fargate

ECS Fargate shares many traits with Lambda: they’re both managed by AWS (in neither case are we taking care of the underlying servers), they scale up and down automatically, and each can have an IAM role that provides permissions for the application’s runtime.

However, Fargate containers don’t automatically recycle. They are relatively long-lived when compared to Lambda and can easily live longer than the maximum STS token expiration. This means the method used by Lambda to inject the STS tokens into the runtime environment won’t work.

Instead, you can use the optional (but recommended) Task Role ARN property of your ECS task definition to specify the permissions you would like your task to have. Your credentials provider then assumes this role to obtain short-term credentials it can use. The SDK manages this for you; you don’t have to do anything but set the TaskRoleArn in your task definition.
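In a task definition, that looks roughly like the following fragment (the account ID, role name, and image are placeholders, not real values):

```json
{
  "family": "my-service",
  "taskRoleArn": "arn:aws:iam::123456789012:role/MyAppTaskRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"
    }
  ]
}
```

The task role grants permissions to your application code; it is distinct from the execution role, which ECS itself uses for things like pulling container images and writing logs.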

Why You Should Know This

The AWS SDK’s credentials provider doesn’t know “I’m in a Lambda” or “I’m in Fargate.” When invoked, the SDK will use the default credentials provider to step through a chain of locations to look for credentials and it will stop as soon as it finds one. This means things often “just work.” But, it also means you can short-circuit the precedence chain if you are not careful (or you can do it purposefully; I’ll give an example later).

If you are using Lambda, and you new up an SDK client like this:

const client = new DynamoDBClient({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'ABC', // Don't do this
    secretAccessKey: '123', // Don't do this, either
  },
});

your credentials provider will never check the environment variables for credentials and will run with what you gave it.

Likewise, in Fargate, if you either pass in direct credentials or set environment variables of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, your credentials provider will never use your TaskRoleArn. This can be confusing if you are not used to it.

Breaking the Chain on Purpose

I was working with a client on a container migration project, where they needed to move their container workloads from Kubernetes (K8s) on EC2 over to Fargate on ECS. At one point during the transition, the same container needed to be running simultaneously in both places. I knew I wanted to use the TaskRoleArn in Fargate, but that would not fly in the K8s deployment, as the container would grab credentials from the EC2 instance on which it ran. And, since that EC2 instance served many disparate K8s pods, it was a poor[3] place to manage the runtime permissions of the containers underneath it.

The K8s configuration had environment variables set to long-term credentials for an IAM user. At first, the ECS task definition just used the same credentials (from env vars). Then, we created a dedicated IAM role for the task and attached it to the definition as a TaskRoleArn. OK, time for a quick quiz:

What happens now? The ECS container will:
A) Use the IAM role from TaskRoleArn.
B) Use the environment variable credentials.
C) Throw a ConflictingCredentialsError.

The correct answer is B. As long as those environment variable credentials are present, the credentials provider will stop looking after discovering them. During the migration, we used this to our advantage: we kept the code the same and just modified the configuration based on the destination (environment variable credentials in K8s, none in Fargate). Eventually, we were only using the TaskRoleArn, and we could retire those long-term credentials and the environment variables that surfaced them.

What Can Go Wrong?

Long-term credentials pose a real risk of leaking. AWS advises its users to take advantage of SSO and IAM roles for their user and application runtimes, respectively. I know an engineer who inadvertently checked a hard-coded AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY into a public GitHub repository. Within minutes, they had been scraped, and expensive Bitcoin miners were deployed in far-away regions of his company’s AWS account (kudos to AWS for expunging those charges from the AWS bill the following month).

The engineer had thought the repository was private. However, the fundamental issue was using hard-coded, long-lived credentials in the first place. By using fully managed, serverless architectures like Lambda and Fargate along with role-based, short-term credentials, you can avoid this kind of headache.

Further Reading

AWS Documentation: Setting credentials in Node.js
AWS CLI User Guide: Configuration and credential file settings
Amazon ECS Developer Guide: Task IAM role
Ownership Matters: Zero Trust Serverless on AWS
Gurjot Singh: AWS Access Keys – AKIA vs ASIA
Nick Jones: AWS Access Keys – A Reference

  1. I have written on zero trust here. ↩

  2. Never give programmatic access to your root user. ↩

  3. Least Privilege would be off the table. ↩

The post AWS Credentials for Serverless appeared first on ProdSens.live.

]]>
https://prodsens.live/2024/04/25/aws-credentials-for-serverless/feed/ 0
What’s New in API7 Enterprise 3.2.9: Custom Plugin Management
ProdSens.live, Wed, 10 Apr 2024
https://prodsens.live/2024/04/10/whats-new-in-api7-enterprise-3-2-9-custom-plugin-management/


Introduction

To meet the personalized and efficient API management needs of enterprises, API7 Enterprise has introduced a custom plugin management feature. Through custom plugins, enterprises and developers can precisely extend the functionality of the API gateway according to business requirements, effectively addressing diverse business scenarios and thus enhancing development efficiency and system flexibility.

Feature Overview

1. Concept of Custom Plugin Management

In the API7 Enterprise dashboard, users can easily upload or delete custom plugins and integrate them with ADC (APISIX Declarative CLI). Plugin source code is managed at the organization level. Once uploaded, all gateway groups and services can directly reference a plugin, greatly enhancing the flexibility and efficiency of API management.

2. Plugin Usage Rules

Regarding the usage of plugins, we have established a set of rigorous rules. Custom plugins are only issued when referenced by a service and first published to a specific gateway group. This design ensures precise deployment and efficient utilization of plugins. Additionally, to ensure system stability and security, users must ensure that no service is using a custom plugin before deleting it.

3. Access Control

Access control for the custom plugin management feature adopts a strict RBAC (Role-Based Access Control) mechanism. Super Admin has the highest authority, capable of viewing and editing all plugins; the API Provider can only view plugin information; while Runtime Admin and Viewer can only perform viewing operations. Such permission settings ensure that users with different roles can only execute operations they are authorized to, effectively maintaining system security and stability.


Usage Guidelines

1. Custom Plugin Development

The plugin development process encompasses requirements analysis, design planning, coding implementation, and comprehensive testing. Developers need to design the functionality and interfaces of plugins according to actual requirements, write code, and conduct thorough testing to ensure the stability and reliability of plugins. For a more in-depth understanding of plugin development steps, you can refer to this blog to build a plugin from 0 to 1.

2. Custom Plugin Upload, Editing, and Reference

Super Admin has the privilege to upload custom plugins in gateway settings. During the upload process, the system conducts security checks on plugins to ensure no potential risks.

[Screenshot: Upload custom plugins]

Upon uploading a plugin, users can provide the following information: plugin category, detailed description, relevant documentation link, and author name. The plugin’s name and version will be automatically parsed by the system, aiding other users in understanding and utilizing the custom plugin effectively. These details serve as crucial clues for issue tracing and resolution.

[Screenshot: Add custom plugins]

Uploaded plugins appear in both the custom plugin list and the pending plugin list for Service/Route/Global rules, making them easy for other users to reference. Plugins can be edited without restriction, and changes take effect immediately.

[Screenshot: Edit plugins]

3. Custom Plugin Deletion

API Providers can easily add and select custom plugins in the Service Template, flexibly applying them to specific API services. When a plugin is deleted, the system synchronously removes all relevant references from service templates or history services using that plugin, ensuring data consistency and integrity. This design not only simplifies the operation process but also effectively avoids data chaos caused by misoperations.

Conclusion

The introduction of custom plugin management enhances the flexibility and extensibility of API7 Enterprise. This innovative feature empowers enterprises to customize and integrate plugins according to their specific business needs, thereby better addressing particular business scenarios. With custom plugins, enterprises can seamlessly extend the functionality of API7 Enterprise, achieving more refined management and efficient operational processes.

70 AI Prompt Examples for Marketers to Use in 2024
ProdSens.live, Wed, 10 Apr 2024
https://prodsens.live/2024/04/10/70-ai-prompt-examples-for-marketers-to-use-in-2024/


Garbage in, garbage out. While that’s certainly true of coding, that now applies to marketers who want to make the most of AI. I’ve written dozens of iterations of the same prompt, refining my query until I could strike the right balance.

When used right, AI saves me time on routine tasks and sparks my thinking. I can then focus on sprucing up the bot’s output or funnel my energies to the more creative, engaging parts of my job.

However, the right prompts are essential to make the most of AI’s capabilities.

In this article, I’ll share examples of AI prompts that marketers can use to make the job easier. I’ll also share essential data on how marketers are using AI today.


How do AI prompts work?

All AI tools share a commonality: “great prompt = great output.”

A great AI prompt is specific, straightforward, filled with relevant information, and uses complete sentences. If your AI prompts deviate from the above qualities, the odds of getting unusable responses increase.

AI Prompts in Marketing Today

Our recent State of AI survey shows that hundreds of marketers benefit from artificial intelligence solutions. We surveyed 1,350 U.S. business professionals.

Marketers from our survey found that AI helped their teams automate manual tasks, save time, create personalized content, and better understand customer needs. That all relies on writing prompts that are specific and clear.


We asked marketers for their most effective techniques when writing prompts for generative AI. Of respondents, 53% suggested offering relevant context or background information. That includes specifying the target audience, describing the themes to cover, and providing additional notes.

Other best practices included using follow-up prompts to expand on previous outputs (43%) and providing specific prompts (45%). Another 55% recommended experimenting with different prompts to see what works best for your specific use case.


When using generative AI to write copy, the majority of marketers (51%) needed to write three prompts in order to achieve the desired result. When writing messages, 63% of respondents said they only needed to make minor edits to the text.

So, the prompts you use make all the difference. To learn more about how marketers leverage AI, download the State of AI report.

Marketers use AI for more than one purpose. They can use it to brainstorm entire processes or series if done correctly. So, as you find inspiration for your AI prompts, ‌try them out in HubSpot’s Content Assistant.

Join the waitlist for HubSpot’s Content Assistant today.

This content assistant tool natively integrates with the HubSpot products you know and love, allowing you to toggle between manual and AI content creation to generate copy for blogs, emails, and more.

Now, let’s explore the different prompts you can use for your marketing strategy.


Examples of AI Prompts for Marketers


Educational Prompts

These prompts are useful for writing drafts of top-of-the-funnel content about popular topics. Here are some examples:

1. What is [topic]? Write a blog post of [number] words introducing the reader to [topic].

2. Briefly explain the stages of the [topic].

3. List the key elements of effective [topic].

4. What is the difference between [topic 1] and [topic 2]?

5. Outline how [topic] trends have influenced [another topic].

Informative Prompts

Informative prompts let you generate content that offers valuable insights to readers on a topic. Here are a few examples:

6. Create content for our help page that explains how [popular software feature] works.

7. Explain what [your company] can learn from [competitor] optimization of its user experience.

8. What are some popular myths about [topic]? Write a strong essay under 1,000 words that dispels all myths.

Listicle Prompts

These prompts help you outline ideas and create drafts for a list blog post or social media post. See some examples below.

9. List [number] must-have tools for beginner [topic] enthusiasts.

10. List [number] blog post titles on the benefit of an effective [topic].

11. List the major themes in our recent customer review below: [review].

12. List [number] common misconceptions about [topic] and debunk them.

13. List [number] frequently asked customer questions about our [topic]. Provide answers under 100 words to each question.


Technical Prompts

AI tools help write drafts of technical materials. Below are some technical AI prompts.

14. Write a [user manual] for [product feature] that guides users through its use.

15. Attached is the raw data of a survey we conducted. Our company’s name is [name]. We surveyed [user groups]. Analyze the survey data and outline the key findings.

16. Create a business proposal for a new content management system in a hypothetical company. Address costs, timelines, and expected benefits.

Art AI Prompts

Creating great art with AI is both a science and an art. Before creating an art prompt, you need to set up an account with tools like Midjourney. Here’s how an AI expert, Ruben Hassid, recommends you do this:

1. Open Midjourney and Discord accounts.

  • Google Midjourney.
  • Click Join Beta.
  • Create a Discord account.
  • Subscribe to any of their plans.

2. Use Midjourney.

  • Invite Midjourney to your channel.
  • Start a prompt with “/imagine.”
  • Use descriptive words and techniques.
  • Select the best variation out of 4.
  • Upscale it or create variations of it.

3. Upscale the image or create variations.

U = Upscale = Make an image bigger.

V = Variation = 4 new images based on that one.

U1 = Upscale the top left image.

U2 = Upscale the top right image.

U3 = Upscale the bottom left image.

U4 = Upscale the bottom right image.

V1 = Create 4 variations from the top left image.

V2 = Create 4 variations from the top right image.

V3 = Create 4 variations from the bottom left image.

V4 = Create 4 variations from the bottom right image.


Examples of AI art prompts

17. An image for a [content type] showing a researcher engrossed in their work.

18. An image of a bold [color] lady for a web page. Lady should wear a jacket, look forward, smile, have dark hair, fold her hands, and stand in a library setting.

19. An image of nine professionals in a Zoom call setting. Blur the images a bit. Place the image of a [color] man in front of the image. The man should have a bold, bright smile and should be in a suit.

20. Image of cartoon researching with their computer. A ghost caricature behind the cartoon shows the researcher is a ghostwriter.

Examples of AI Prompts for Lead Generation

Lead generation is the process of attracting prospects to your business and increasing their interest in becoming customers.

AI can empower marketers to attract more potential customers based on buyer persona characteristics if specified in the AI prompt. The following examples showcase how to get those customized results.

21. Generate ideas for a new product launch in [month] that incorporate the theme of [season] and [tone].

22. Brainstorm content ideas for a blog post about [topic] in [number] of words or fewer that is search engine optimized in formatting using H2s and H3s accordingly.

23. Suggest high-volume keyword clusters for [topic] to optimize search engine rankings.

24. Identify popular trends in the industry of [product or service] that an audience of [target audience] will be interested in this [upcoming season].

25. Generate ideas for an upcoming marketing campaign about [new product] with a marketing mix comprising [product] [price] [place] [promotion channels].

26. Suggest [number] ways to improve website traffic during [holiday season].

27. Identify potential target audiences in [location] that would be interested in buying [product] to solve [pain point].

28. Suggest new strategies for lead generation in [market] and [industry].

29. Generate ideas for creating a viral social media campaign using recent [social media platform] trending audios or popular memes from [month] [year].

30. Identify new channels for advertising [product] aside from [current platforms already in use].

Examples of AI Prompts for Social Media Posts

Did you know that AI can recognize different social media platforms? Marketers benefit from using AI prompts for their preferred channels instead of basing strategy on generalizations.

Here are some excellent examples to follow for social media drafts.


31. Write a tweet promoting a new product suited for a target audience in [industry] and [location].

32. Generate a post for Instagram featuring a customer testimonial about [product] in under [number] words.

33. Write a Facebook post introducing a new product feature and rephrase its current description to sound more exciting and effective: [insert current product description text].

34. Create a LinkedIn post promoting a new job opening in [number] words or less with a strong call-to-action at the end.

35. Draft a Pinterest post featuring a new product line and provide tips on improving product photography for [type of aesthetic].

36. Write a YouTube video description for a new product review that links to [insert links] for viewers to go to the product landing page for more information.

37. Draft a TikTok video script showcasing a product demonstration for 2 minutes at maximum.

38. Create a Snapchat story promoting a limited-time offer and describe the type of stickers or filters that can improve it.

39. Write a blog post title to promote a new social media campaign in [number] characters or less.

40. Draft an email subject line to promote a new blog post that feels personal, enticing, and not spammy.

Examples of AI Prompts for Podcast or Video Content

Developing ideas for podcasts or videos on your own can be exhausting. Thankfully, AI can provide ideas for them and even walk you through the script and development process if you specify it in your prompt.

See the different prompts that can help you create multimedia content.

41. Draft a podcast episode about the latest [industry] trends and innovations that contains [number] minutes of dialogue.

42. Produce [number] of topics for a video series featuring interviews with thought leaders in [industry].

43. Develop a podcast episode discussing the benefits of [products or services] divided into four chapters.

44. Create a video series that showcases customer success stories.

45. Produce a podcast episode on the history and evolution of [brand or industry].

46. Develop a video series on best practices for using [products or services] in [number] of different ways.

47. Create a podcast episode that features an expert roundtable discussion on [industry topics].

48. Produce ideas for a video series featuring a behind-the-scenes look at your company’s operations.

49. Develop a podcast episode that offers tips and advice on succeeding in [industry] as an entrepreneur.

50. Create a video series highlighting the impact of [products or services] on the lives of customers or clients in [demographic].

Examples of AI Prompts for Content Promotion


Marketers looking for more effective ways to promote their products or services can use AI for best practices. Explore the different channels, tips, and methods this technology can yield using solid AI prompts.

51. Suggest the best time and day of the week to publish a blog post about [topic].

52. Write a press release announcing a new product launch geared toward [target audience] that sounds confident, exciting, and interesting.

53. Generate ideas for outreach emails to promote a new product, including [number] of attention-grabbing subject lines and [number] of clear calls-to-action.

54. Write a guest post for a popular industry blog discussing the impact of [product] on [marketing strategy].

55. Suggest the [number] best hashtags for a social media campaign on [social media platform] to reach [target audience].

56. Draft a script for a 60-second podcast ad [for service/product] that has a friendly tone and witty humor fit for [target audience characteristics].

57. Create a landing page for a new product promotion divided into [number] sections about different benefits based on this description: [insert new product description].

58. Write a script for a TV commercial involving [number] actors in [setting] that promotes [product/service].

59. Draft a product description for an ecommerce site that is [number] sentences long and enticing to [target audience].

60. Generate ideas for cross-promotion with other businesses in the [market], specifically with brands such as [brand names].

Examples of AI Prompts for Repurposing Content

AI can allow marketers to reuse and refresh outdated content to make something new or more useful in the current year — a process we call historical optimization.

When making AI prompts for content repurposing, be creative and see how you can transform your old work into something new.


61. Repurpose a blog post into a video script using this article: [insert old blog post].

62. Turn a webinar into a podcast episode using this pre-existing transcript: [insert old webinar transcript].

63. Repurpose an ebook into a series of [number] blog posts using this pre-existing text: [insert old ebook content].

64. Generate ideas for updating an outdated infographic on [topic] for [year].

65. Rewrite a blog post into a series of [number] social media posts for [social media platform].

66. Turn an old product page into a landing page for a new product using this pre-existing copy: [insert old product page content].

67. Generate ideas for repurposing a white paper into a video series about [topic] using this pre-existing text: [insert old whitepaper content].

68. Rewrite an old email campaign into a new one with updated messaging suited for [season] [year].

69. Turn a research report into a series of social media posts using this information: [facts from the research report].

70. Generate ideas for repurposing an old product demo into a webinar.

Use Thorough AI Prompts for Thorough Results

AI is becoming incredibly useful for marketers in more ways than one. When you leverage this technology, make sure you’re using specific and concise prompts to yield the results your team seeks.

Experiment with different AI tools and AI prompts to find the best results for your needs.


The post 70 AI Prompt Examples for Marketers to Use in 2024 appeared first on ProdSens.live.

]]>
https://prodsens.live/2024/04/10/70-ai-prompt-examples-for-marketers-to-use-in-2024/feed/ 0
Noob learns Asynchronous Rust. https://prodsens.live/2024/01/22/noob-learns-asynchronous-rust/?utm_source=rss&utm_medium=rss&utm_campaign=noob-learns-asynchronous-rust https://prodsens.live/2024/01/22/noob-learns-asynchronous-rust/#respond Mon, 22 Jan 2024 00:25:11 +0000 https://prodsens.live/2024/01/22/noob-learns-asynchronous-rust/ noob-learns-asynchronous-rust.

I’m basically documenting my journey here. Prerequisites Closure ✨ Function that has no name and can capture variables…

The post Noob learns Asynchronous Rust. appeared first on ProdSens.live.

]]>
noob-learns-asynchronous-rust.

I’m basically documenting my journey here.

Prerequisites

Closure ✨

A function that has no name and can capture variables from its surrounding scope.

Creating Closure:

This is an example of a simple closure 👇

let increment = |x| x+1;

Here, whatever is written inside the pipes is the input, and the expression that follows is the output.

Using Closure:

println!("increment of 2 is {}", increment(2));

Syntax & Example:

If the body is a single expression:

let name = |input| output;

If the body is a block with an explicit return type:

let name = |input| -> ReturnType { expression };

Here’s a proper example👇

let age_allowed = |x: u8| -> &'static str {
    if x > 17 { "pass" } else { "underage" }
};
println!("{}", age_allowed(20));
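The definition above says closures can capture variables from the surrounding scope, but none of the examples so far show a capture. Here's a minimal sketch of one (the variable names are ours):

```rust
fn main() {
    let greeting = String::from("hello");

    // `greeting` is not a parameter: the closure captures it
    // from the surrounding scope by reference.
    let greet = |name: &str| format!("{}, {}!", greeting, name);

    println!("{}", greet("world")); // prints "hello, world!"
}
```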

Thread ✨

Group of code that executes to fulfill a certain functionality.

By default, all code runs on the main thread. However, we can spawn additional threads to run more code concurrently.

We will look at the uses of threads later on; for now, we will just cover how to use them.

use std::thread;

fn main() {

    thread::spawn(|| {
        println!("what is real?");
    });

    // 'thread' is a module with a function called 'spawn'
    // spawn() takes a closure as its argument
    // spawn() starts a new thread and runs the closure in it

    println!("this is main");
}

The above code is a basic example of using threads in Rust.

From the above example, sometimes you'll see "what is real?" getting printed, and sometimes you won't. This happens because the spawned thread doesn't get a chance to execute every time.
This behaviour depends entirely on how your device's OS schedules the threads, which is why we see inconsistent results.

To be clear, the spawned thread only gets a chance to run while the main thread is waiting. To get consistent behavior, we'll manually make the main thread wait until the spawned thread is done executing, by:

Explicitly making the main thread sleep for a fixed amount of time (this requires use std::time::Duration;):

thread::sleep(Duration::from_secs(1));

Using join(), which makes the main thread wait until the spawned thread is done executing:

thread::spawn(|| { /* expression */ }).join().unwrap();
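With join(), execution order becomes deterministic. A small sketch; the helper function is ours, not from the article, so the result can be checked rather than only printed:

```rust
use std::thread;

// Hypothetical helper returning the message the spawned thread produces.
fn spawned_message() -> &'static str {
    "what is real?"
}

fn main() {
    // join() blocks the main thread until the spawned thread finishes
    // and hands back the closure's return value.
    let msg = thread::spawn(spawned_message).join().unwrap();

    println!("{}", msg);       // always printed first
    println!("this is main");  // always printed second
}
```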

Variables & Spawned Threads ✨

Suppose you’re meant to write a code for downloading something. We all know that download occurs in the background without interrupting anything else that you’re doing, meaning you’ll likely spawn a thread for it. 

println!("Which one do you wish to download?: 1 or 2");

You’ll capture user input into a variable from here (duh), but would it be safe to just directly pass on this variable to spawned thread?

OWNERSHIP RULES

  • Whenever you reference a variable, it's called borrowing.
  • When you don't, it's called moving the value, or ownership transfer.
  • The problem with references is that they can be invalidated. Suppose you create a variable and pass a reference to it into a function. While that function is still running, the variable itself may no longer exist (used up or dropped). In that case the reference would be invalid; this is why lifetimes exist.
  • However, lifetimes tend to complicate the code, so we prefer to move the variable into the closure.

BACK TO TOPIC

thread::spawn(move || {
        println!("Downloading option {}...", input);
    }).join().unwrap();

So, this is how you move the value into the closure.
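Putting it together as a runnable sketch: the option value is hardcoded here in place of real user input, and the helper function is ours:

```rust
use std::thread;

// Hypothetical helper representing the download work.
fn download_message(option: &str) -> String {
    format!("Downloading option {}...", option)
}

fn main() {
    // In the real program this would come from stdin.
    let input = String::from("1");

    // `move` transfers ownership of `input` into the closure,
    // so the spawned thread owns the data it uses.
    thread::spawn(move || {
        println!("{}", download_message(&input));
    })
    .join()
    .unwrap();

    // `input` is no longer usable here: its ownership moved into the thread.
}
```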

Arc ✨

Due to “Ownership Rules”, once we have moved a variable to a thread, we can no longer use it in other threads.

use std::thread;

fn main(){
    let something = 5;
    thread::spawn(move || {
            println!("{}", something);
        }).join().unwrap();
    thread::spawn(move || {
            println!("{}", something);  //ERROR
        }).join().unwrap();
}

To get past this, we use a wrapping datatype called Arc.

  • Arc stands for Atomic Reference Count.
  • Since the variable is moved into one thread, it obviously cannot be used in other threads.
  • To make our data shareable across multiple threads, we wrap it inside a data structure that grants multiple ownership of a variable. Arc is exactly that.

use std::thread;
use std::sync::Arc;

fn main(){
    let something = Arc::new(5);


    let c1 = Arc::clone(&something);
    thread::spawn(move || {

            println!("{}", c1);

        }).join().unwrap();

    let c2 = Arc::clone(&something);
    thread::spawn(move || {

            println!("{}", c2);

        }).join().unwrap();
}

Cool! so now we can share data between multiple threads 🏆.
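To see the multiple ownership in action, Arc exposes strong_count, which reports how many owners of the value currently exist. A short sketch (not from the article):

```rust
use std::sync::Arc;

fn main() {
    let something = Arc::new(5);
    println!("owners: {}", Arc::strong_count(&something)); // 1

    let c1 = Arc::clone(&something);
    let c2 = Arc::clone(&something);
    println!("owners: {}", Arc::strong_count(&something)); // 3

    drop(c1);
    println!("owners: {}", Arc::strong_count(&something)); // 2

    // c2 still points at the same value as the original.
    assert_eq!(*c2, 5);
}
```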

Interior Mutability - Mutex ✨

If you notice, Arc doesn't grant mutability alongside ownership: all the references handed out by Arc are immutable, so you cannot modify the variable through them. This is, however, by design.

use std::thread;
use std::sync::Arc;

fn main() {
    let v = Arc::new(vec![1,2,3]);

    let r1 = Arc::clone(&v);
    thread::spawn(move|| {
        r1.push(4);     //ERROR
    }).join().unwrap();
}

“WHY ?”

If a variable is mutable and shared with multiple threads, then multiple threads modifying it at the same time could lead to race conditions and crash the program.

To avoid this, variables cloned via Arc from the main thread to other threads are immutable.

“But then how will we modify our shared-data?”

Here comes the use of Mutex.
Mutex is technically a data structure, but in layman's terms it's a lock that ensures a variable can be accessed by only one entity at a time. Hence we use another wrapping datatype, Mutex, here.
Mutex allows the data to be mutable, but with a twist.

Syntax:

use std::sync::Mutex;

fn main() {
    let m = Mutex::new(5);
    {
        let mut num = m.lock().unwrap(); //locking the data so no other thread can use it at the moment. Using unwrap() because lock() returns result.
        *num = 6;
    }
    println!("m = {:?}", m);
}

MUTEX OVERVIEW

  • Think of the shared data as a washroom: whenever you visit the washroom, you lock the door so nobody else can use it until you're done. Likewise, whenever you write to shared data, you apply the lock method on it so no other thread uses it in the meantime.
  • So, in short, Mutex does allow mutability, but only once you have applied the lock method on the shared data. This is called interior mutability.
  • This works because the lock method essentially gives a mutable reference to the data inside the mutex, which the type system protects from being accessed without acquiring the lock.
  • The lock method also returns a Result, so a thread trying to access locked shared data can handle that case gracefully.

You can have a more technical read about Mutex here.

BACK TO TOPIC

use std::thread;
use std::sync::{Arc, Mutex};

fn main() {

    let v = Arc::new(Mutex::new(vec![1,2,3]));

    let r1 = Arc::clone(&v);
    thread::spawn(move|| {

        let mut data = r1.lock().unwrap();
        data.push(4);

    }).join().unwrap();

    let r2 = Arc::clone(&v);
    thread::spawn(move|| {

        let mut data = r2.lock().unwrap();
        data.push(5);

    }).join().unwrap();

    println!("{:?}", v.lock().unwrap());
}

So, in short, if your shared data is read-only, then you don't need Mutex.
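Note that the example above joins each thread right after spawning it, so the threads never actually run at the same time. Here's a sketch (ours, not from the article) where several threads run concurrently and the Mutex earns its keep:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();

    // Spawn all threads first so they can run concurrently...
    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // Each thread must acquire the lock before mutating.
            let mut num = counter.lock().unwrap();
            *num += 1;
        }));
    }

    // ...then join them all at the end.
    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 10);
    println!("final count: {}", counter.lock().unwrap());
}
```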

Theory of Asynchronous Programming

Tasks ✨

Task refers to work or a specific action that a computer program is designed to perform.

Tasks can be divided into 2 categories:-

  • CPU bound: Tasks that require a lot of computational resources. They spend most of their time using the CPU (heavy computation, processing data, etc.).
  • Input/Output bound: Tasks that spend most of their time communicating with external devices, such as keyboards, mice, disks, networks, etc.

Asynchronous Programming is more associated with I/O than CPU.

I/O Handling ✨ 

Blocking & Non-blocking are two different ways of handling I/O bound tasks in a computer program.

Definition

Blocking I/O: The program waits for the I/O operation to complete before continuing its execution. For example, if the program wants to read some data from a file, it will call a function that blocks the current thread until the data is available to read. This means the program cannot do anything else until it reads the data.
Non-blocking I/O: The program does not wait for the I/O operation to complete, but instead receives some kind of result or promise that describes the progress. For example, if the program wants to read some data from a file, it will call a function that returns instantly. The result will show 'pending' if the data is not available yet. This means the program can do other things while the I/O operation is in progress.

In Layman’s Terms:

Blocking I/O is like ordering a pizza and waiting for it to be delivered before doing anything else.
Non-blocking I/O is like ordering a pizza and doing other things while the pizza is being prepared and delivered. You can check the status of the pizza from time to time, but you don’t have to wait for it.
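The pizza analogy can be sketched with a standard-library channel: try_recv is the non-blocking "check on the pizza", while recv would be the blocking wait. This sketch is ours, not from the article:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    // The "pizzeria" prepares the order in the background.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        tx.send("pizza delivered").unwrap();
    });

    // Non-blocking: check on the order, do other things in between.
    loop {
        match rx.try_recv() {
            Ok(msg) => {
                println!("{}", msg);
                break;
            }
            Err(mpsc::TryRecvError::Empty) => {
                println!("doing other things...");
                thread::sleep(Duration::from_millis(10));
            }
            Err(e) => panic!("channel closed: {}", e),
        }
    }
}
```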

The Need for Multi-Threading ✨

Everything has limits, and so does a thread.
A 'bottleneck' is the point at which a thread reaches its peak computational capacity. If a process were to exceed it, performance degrades and other issues follow.

So when dealing with CPU-bound tasks that demand significant computational power, offloading them to separate threads can prevent bottlenecks. A program can distribute the computational load to other threads, preventing a single thread from monopolizing resources and potentially improving overall performance.

Hence, multi-threading is only needed when there's a high computational load on the main thread.

Summary ✨

  • Synchronous: By default, a program handles I/O operations by blocking. Everything runs on the main thread and execution occurs one step at a time.
  • Asynchronous & Single-Threaded: An alternative approach where I/O operations can be non-blocking. Everything runs on the main thread, but tasks can make progress concurrently.
  • Asynchronous & Multi-Threaded: When an application has both I/O and CPU-bound tasks, more threads are spawned to prevent bottlenecks.
    I/O operations and CPU-bound tasks run on different threads, and execution is complex.

Starting Asynchronous Programming

We are now equipped with the necessary knowledge to actually start doing asynchronous programming. Honestly, I couldn't find any resource more useful for this than the video below:

Anything else that I write here in continuation would just be a rip-off of the above video anyway (because the video is that good).

The above video covers these topics:

  • Programming in Synchronous, Asynchronous Single-Threaded & Asynchronous Multi-Threaded styles
  • Futures
  • Tokio Runtime
  • Tokio Features

With that, it’ll be all. 🎓
Rest of expertise will only come from working on real-world projects. Some project ideas are - Web Scraper, Chat Application, File Downloader & REST API.

Ciao! 🍹

The post Noob learns Asynchronous Rust. appeared first on ProdSens.live.

]]>
https://prodsens.live/2024/01/22/noob-learns-asynchronous-rust/feed/ 0
Kubernetes CrashLoopBackOff – What is it and how to fix it? https://prodsens.live/2024/01/05/kubernetes-crashloopbackoff-what-is-it-and-how-to-fix-it/?utm_source=rss&utm_medium=rss&utm_campaign=kubernetes-crashloopbackoff-what-is-it-and-how-to-fix-it https://prodsens.live/2024/01/05/kubernetes-crashloopbackoff-what-is-it-and-how-to-fix-it/#respond Fri, 05 Jan 2024 09:24:48 +0000 https://prodsens.live/2024/01/05/kubernetes-crashloopbackoff-what-is-it-and-how-to-fix-it/ kubernetes-crashloopbackoff-–-what-is-it-and-how-to-fix-it?

Introduction CrashLoopBackOff is an error that appears most of the time when a container repeatedly fails to restart…

The post Kubernetes CrashLoopBackOff – What is it and how to fix it? appeared first on ProdSens.live.

]]>
kubernetes-crashloopbackoff-–-what-is-it-and-how-to-fix-it?



Introduction

CrashLoopBackOff is an error that appears most of the time when a container repeatedly fails to start in a pod. Kubernetes will try to auto-restart a failed container, and when the crashes keep happening, it restarts the container with an exponential backoff delay. The backoff delay begins at a small value and grows with every unsuccessful attempt. While waiting between these restart attempts, the pod sits in the CrashLoopBackOff state.

A Closer Look at CrashLoopBackOff

Defining CrashLoopBackOff as a status message

In Kubernetes, a status message indicates the state of a pod and its containers. It is shown when you execute the kubectl get pods command, which lists the pods in your clusters. A pod status message indicates whether the pod is ready, running, pending, failed, or in CrashLoopBackOff. The CrashLoopBackOff message indicates repeated crashes of a container within a pod that Kubernetes cannot recover.

Highlighting the difference between this and other statuses like Pending, Running, and Failed

Other statuses, such as Pending, Running, and Failed, have different meanings and implications than CrashLoopBackOff.

Pending: One or more containers have not started; however, the Kubernetes system has accepted the pod.

Running: The pod has connected to a node, and all the containers have been created. At least one container has already started running or is in the process of starting or restarting.

Failed: All containers in the pod have terminated, and at least one terminated in failure, that is, it exited with a non-zero exit code or was stopped by the system.

CrashLoopBackOff: A more severe status than Failed, indicating that a container keeps failing even after several restarts by Kubernetes.

Common Causes of CrashLoopBackOff

Errors in Kubernetes Deployment

The impact of deprecated Docker versions

Using incompatible or deprecated Docker versions can lead to errors while deploying in a Kubernetes environment. Deprecated Docker versions can affect your deployment in several ways:

  • Degraded performance, security, or compatibility, which may impact the quality and stability of your deployment.
  • Errors, failures, or unexpected outputs, which can cause your deployment to not work as intended or to crash.
  • Data loss or corruption, which may cause deployment failure or compromise your data.

Recommendations for maintaining version consistency

Given the impact that deprecated or outdated versions of Docker can have on a Kubernetes deployment, the following recommendations deserve serious consideration to maintain security and a smooth, consistent experience.

These are as follows:

  • Upgrade to the newest stable versions of Docker Engine and Kubernetes. It is important to scrutinize the release notes and deprecation page to verify any changes that may apply to your setup.
  • Prefer explicit, semantic tags over the default 'latest' tag; 'latest' is less reliable because whenever it changes, it undermines the consistency of the system.
  • Use a multi-stage build process. This generally produces fewer layers and smaller images, which optimizes performance and improves the efficiency of all deployments involved.
  • Lastly, and equally importantly, pick a base image that remains secure, manageable, and compatible across platforms.

The Output below can help us to identify any version discrepancies:

[Screenshot: output highlighting version discrepancies]

Missing Dependencies

Importance of runtime dependencies

For Kubernetes running container-based applications, the runtime dependencies must be working correctly. The meaning of dependencies here is libraries, configurations, and other resources that are required for the smooth working of the application. The importance of these dependencies is as follows:

Functionality and Features:
Some of the functionalities and features are driven by certain specified dependencies that the application will require.

Efficiency and utilization of resources:
The common component, available through external dependencies, helps optimize the resource utilization of applications, reduces duplication, and follows a cost-saving approach.

Isolation and modular approach:
By isolating different sections from each other, dependencies support a modular design approach. This simplifies development, maintenance, and troubleshooting.

Common scenarios where such dependencies go missing

Incomplete Container Images:
A container image built without an important runtime dependency will ultimately fail when it runs.

Configuration Errors:
If your configuration lacks either environment variables or mount paths, the deployment will be missing essential runtime data.

Network Issues:
Network failures can cause external dependencies to be unreachable. In that case, the application will not have the necessary resources.

Version Incompatibility:
Sometimes, different applications need certain versions of libraries and packages. Thus, it may result in missing dependencies if the expected libraries and packages of the application do not match the deployed version.

Volume Mount Issues:
If the configuration for volume mount is not set up properly, necessary data files or configuration may fail to load, potentially resulting in missing dependencies.

Let’s say we have applied a configuration with a missing volume dependency. In the Output below, you can see that the console has thrown an error after applying the configuration:

[Screenshot: kubectl error output after applying the configuration with the missing volume]
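As a concrete illustration, this is the kind of manifest that triggers such an error: a pod that mounts a volume name which is never defined (all names here are hypothetical).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: demo-app
      image: nginx:1.25
      volumeMounts:
        - name: config-volume   # references a volume...
          mountPath: /etc/app
  volumes: []                   # ...that is never defined, so the API server rejects the spec
```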

Repercussions of Recent Updates

The recent updates to your code, dependencies, configurations, and the environment can also result in a change in your deployment.

How frequent changes can lead to instability

Your code and dependencies will accumulate bugs and inconsistencies when you make regular changes. Frequent modifications also make it difficult to locate the exact problem, because so many updates land between working states.

Strategies for safer and more stable updates

Any update should therefore be made with caution to ensure its safety and stability. Some strategies for safer and more stable updates are as follows:

  • Pin the exact versions of your project's dependencies; this prevents automatic updates to newer versions that might introduce breaking changes or incompatibilities.
  • By implementing feature flags, new features can be turned off and on without deploying new code. This allows you to test new features in production with a subset of users and roll them back quickly if issues arise.
  • All your environments should be consistent with each other (development, staging, production). This reduces the chances of encountering unexpected behaviors in production that weren’t present during development or testing.
  • Rather than updating all instances or users at once, gradually roll out changes to a small percentage of users and progressively increase this number. This helps in identifying issues with minimal impact.

Troubleshooting the CrashLoopBackOff Status

Discovery and Initial Analysis

Identifying the pods in a restart loop

To understand and solve the CrashLoopBackOff issue, identifying and examining the affected pods is essential. The kubectl get pods command lists the pods in the cluster along with their statuses. The -n option selects the namespace, and -o wide displays extra details such as the node each pod runs on.

The Output of this command will look something like this:

[Screenshot: kubectl get pods output showing pods in CrashLoopBackOff]

In-depth Pod Examination

Using the kubectl describe pod command for detailed insights

The kubectl describe pod POD_NAME_HERE command retrieves detailed information about the container spec, pod spec, and events, which makes it useful for examining a pod in depth and understanding why its container crashed while troubleshooting the CrashLoopBackOff status.

In the Output below, you can see highlighted words like Backoff, Failed, CrashLoopBackOff and so on. These words reflect the problem with the pod and the container, and help you narrow down the possible causes of the issue. For example, in our case the Failed reason indicates that the pod cannot run the command 'Run', which does not exist:

[Screenshot: kubectl describe pod output with Backoff, Failed and CrashLoopBackOff events highlighted]

Key Details to Focus On

To resolve the CrashLoopBackOff status, you need details that would lead to identifying and resolving the error. By focusing on the key details below, you can effectively resolve issues related to CrashLoopBackOff status:

Start time: This will help you note when the pod was created or restarted. Look at this time in comparison with the events or logs and observe if there is any correlation or recurrent pattern.

Mounts: These refer to the volumes that are attached to the pod or container. Check for any issues related to permissions, paths or formats that may be causing problems with the mounts.

Default tokens: These are service account tokens that are automatically connected to the pod or container. Verify if there are any problems with expiration, revocation or authentication.

Events: These records document actions and changes within the pod or container. Look out for any errors, warnings or messages in the events log that might provide insight into what caused the crash.

Strategically Using CrashLoopBackOff

Leveraging the status for effective troubleshooting

In Kubernetes, efficient troubleshooting relies heavily on making use of the CrashLoopBackOff status. When a pod is starting up, this status is a signal that there is something wrong, and it should be noted for further investigation. The underlying cause can be analyzed by looking at the logs of the failed pod.

Through this status, recognition and fixing of issues like resource constraints, absence of dependencies and configuration errors would be made possible, leading to an easier startup.

The role of CrashLoopBackOff in CI/CD workflows

CrashLoopBackOff in CI/CD workflows identifies issues and helps you resolve them within your application. CI/CD workflows automate software development, testing and deployment processes. When there are errors or failures during deployment, CrashLoopBackOff can help you identify configuration errors, missing dependencies or incompatible versions that may exist.

Moreover, CrashLoopBackOff guarantees that each component of code and all of its dependencies are thoroughly examined and verified on your Kubernetes cluster. To do this, you can use continuous integration technologies and automated testing to validate and verify your code, including dependencies.

Conclusion

This article has discussed the CrashLoopBackOff error in detail. It is one of the most common Kubernetes errors, and one of the more complex ones as well, complex to diagnose because the root cause can be any one of many. Taking advantage of advanced diagnostic tools provides better insights into container and pod behavior than the basic diagnostic commands commonly used in Kubernetes environments.

Tools such as kubectl logs for detailed container log analysis, kubectl exec to execute commands inside containers, and kubectl port-forward to connect local ports to pods are all useful here. Adopting methods like container debugging with kubectl debug offers a broader approach to resolving challenges encountered while deploying on Kubernetes.

Author: Muhammad Khabbab

The post Kubernetes CrashLoopBackOff – What is it and how to fix it? appeared first on ProdSens.live.

]]>
https://prodsens.live/2024/01/05/kubernetes-crashloopbackoff-what-is-it-and-how-to-fix-it/feed/ 0
TypeScript vs JavaScript – A Detailed Comparison https://prodsens.live/2023/12/22/typescript-vs-javascript-a-detailed-comparison/?utm_source=rss&utm_medium=rss&utm_campaign=typescript-vs-javascript-a-detailed-comparison https://prodsens.live/2023/12/22/typescript-vs-javascript-a-detailed-comparison/#respond Fri, 22 Dec 2023 10:26:10 +0000 https://prodsens.live/2023/12/22/typescript-vs-javascript-a-detailed-comparison/ typescript-vs-javascript-–-a-detailed-comparison

Introduction TypeScript is a statically typed superset of JavaScript, the inherently dynamically typed, high-level scripting language of the…

The post TypeScript vs JavaScript – A Detailed Comparison appeared first on ProdSens.live.

]]>
typescript-vs-javascript-–-a-detailed-comparison



Introduction

TypeScript is a statically typed superset of JavaScript, the inherently dynamically typed, high-level scripting language of the web. It bases itself on the core features of JavaScript and — as the primary mission — extends them with compile time static typing. It also comes with several feature extensions: perhaps most notably enums, class instance types, class member privacy and class decorators. TypeScript offers more advanced additions and considerations with respect to iterators, generators, mixins, modules, namespacing, JSX support, etc., which JavaScript developers would find different and more nuanced towards static typing — as they get familiar with them over time.

In this post, we first shed some light on important concepts related to static typing and learn about the capabilities and advantages they offer TypeScript over JavaScript. We gain insight into the role of type definitions, type annotations and type checking using the structural type system. While doing so, we recall primitive types (number, string, boolean) in JavaScript that lay the foundations of more complex type definitions in TypeScript. We also get to hear about literal types (string, array and object) and additional types (any, unknown, void, etc.) that TypeScript adds to its typing toolbelt and a host of type utilities (Awaited<>, Pick<>, Omit<>, Partial<>, Record<>, etc.) that are used to derive new types from existing ones.

Towards the latter half, we explore the tools that run the processes that facilitate static typing in TypeScript. We get a brief account of how TypeScript code is transpiled with the TypeScript compiler (tsc) to runtime JavaScript code. We also get to understand that TypeScript’s type checker is integrated to the tsc for performing type checking and emitting errors that help developers write type-safe code by fixing type related bugs early in development phase.

We get a quick rundown of the TS type checker's tooling roles: namely editor and linguistic support with code completion, quick fix suggestions, code formatting / reorganizing, code refactoring, and code navigation. We also find out how all the features of the TypeScript compiler are integrated into VS Code with the help of background task runners, which offers a better developer experience by avoiding the need to run tsc repeatedly.

Towards the end, we explore notable feature extensions that TypeScript brings to the table — particularly enums, class instance types, class member privacy and decorators. Finally, we point to implementations of more advanced features such as iterators, generators, mixins, etc.

Steps we’ll cover in this post:

  • TypeScript Concepts

    • TypeScript Concepts – Static vs Dynamic Typing
    • TypeScript Concepts – Type Definitions
    • TypeScript Concepts – Type Annotation and Inference
    • TypeScript Concepts – Type Checking and the Structural Type System
  • TypeScript Tools

    • tsc, the TypeScript Compiler
    • TypeScript Type Checker
    • TS Type Checker – Linguistic Tooling
    • TypeScript Support in VS Code
  • TypeScript Type Definitions / Declaration Packages

    • TypeScript Type Declaration – .d.ts Files
    • TypeScript Type Packages – DefinitelyTyped @types
  • TypeScript’s Extended Features

    • TypeScript Extensions – Enums
    • TypeScript Extended Features – Classes as Types
    • TypeScript Extended Features – Class Member Visibility
    • TypeScript Extended Features – Class Decorators
    • TypeScript Advanced Features

TypeScript Concepts

TypeScript Concepts – Static vs Dynamic Typing

JavaScript is inherently dynamically typed. It means that types of values of expressions in JavaScript are set at runtime, not before that. Dynamic typing leads to different kinds of type errors and unaccounted for behaviors in JavaScript code, especially at the hands of inexperienced developers tasked with scaling an application. And as a codebase grows, maintainability becomes a major concern.

Microsoft created TypeScript to add a static typing system on top of JavaScript. It was open sourced in 2012 to help developers write stable, maintainable and scalable web applications with fewer errors. Static typing means that the types of expressions are determined before runtime. In TypeScript specifically, static typing takes place before the compilation carried out by tsc, the TypeScript compiler.

Static typing involves three major steps: type declaration, type annotation and type checking. Type checking refers to matching and validating type conformance of the value of an expression to its annotated / inferred type with the help of TypeScript’s static type checker.

TypeScript Concepts – Type Definitions

Integral to static typing is declaring proper type definitions for entities in an application. Generally, a type can be declared with an alias or it can be an interface. Types are also generated from TypeScript enums as well as classes. These options allow developers to declare and assign consistent types to expressions in a web application and help prevent type errors and unanticipated bugs at runtime.

Primitive data types (number, string, boolean, null and undefined) that are already part of JavaScript usually lay the foundations of more complex, nested type definitions in TypeScript. TypeScript adds some more static types to its toolbelt, like any, unknown, void, never, etc., to account for different scenarios.

Apart from these, TypeScript also offers a swathe of type utilities that help transform one type to another. Example type utilities include Awaited<>, Pick<>, Omit<>, Partial<>, Record<>, etc. Such utilities are not relevant in JavaScript but in TypeScript, they are handy for deriving useful variants from a base type. Using them adds stability to an otherwise brittle JavaScript codebase and helps make large web applications easily tweakable and maintainable.

TypeScript Concepts – Type Annotation and Inference

Adding proper type annotations to expressions is perhaps the most crucial part of static typing in TypeScript. It is necessary for subsequent verification of type conformance performed by TypeScript’s type checker.

Type Annotations in TypeScript

Type annotations are done explicitly on expressions using primitive data types and / or — as mentioned above — types defined using aliases, interfaces, enums and classes.

Target expressions for type annotation are variable declarations, function declarations, function parameters, and function return types. Annotations are also made to class fields and other members such as methods and accessors — along with their associated parameters and return types.
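A minimal sketch covering these annotation targets (all names are hypothetical):

```typescript
// Variable declaration with an explicit annotation
const appName: string = "prodsens";

// Function declaration: parameter types and the return type are annotated
function add(a: number, b: number): number {
  return a + b;
}

// Class field, method and accessor annotations
class Counter {
  private count: number = 0;

  increment(step: number): number {
    this.count += step;
    return this.count;
  }

  get value(): number {
    return this.count;
  }
}

const counter = new Counter();
counter.increment(2);
```

The type checker then verifies every use of these names against the annotations, e.g. calling add("1", 2) would be rejected before the code runs.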

Type Inference in TypeScript

Where type annotations are not explicitly added, TypeScript infers the type from the value itself: its primitive type, its literal, or its object shape.

Type inference follows two main principles:

  • Best common type: where TypeScript assigns a common type that encompasses all items. This is useful when inferring a common type from an array literal with items composed of primitives. For example, for an array with items of type number and string, the inferred type is the following best common type:
const x = [0, 1, "two"]; // const x: (number | string)[]
  • Contextual typing: where the type of the expression is inferred from the lexical context in which it is declared.
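A small sketch of contextual typing, building on the best-common-type array above (the variable names are hypothetical):

```typescript
// Best common type: inferred as (number | string)[]
const mixed = [0, 1, "two"];

// Contextual typing: the callback parameter `n` needs no annotation.
// Its type (string | number) is inferred from the signature that
// Array.prototype.map expects for this array.
const doubled = mixed.map((n) => (typeof n === "number" ? n * 2 : n + n));
```

Inside the callback, the typeof check narrows `n` further, so both branches type-check without any explicit annotation.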

TypeScript Concepts – Type Checking and the Structural Type System

A particular value of an expression is checked for validity against its annotated or inferred type by TypeScript’s type checker. Type compatibility depends on whether the structure or shape of the value matches that of the annotated one. In other words, TypeScript has a structural type system.

In a structural type system, the shape of the value of an expression must conform to the shape of its annotated type. A value can also be compatible with another type that is identical or equivalent in shape.

For example, stereoTypicalJoe below is typed to User:

type User = {
  username: string;
  email: string;
  firstName: string;
  lastName: string;
};

type Person = {
  username: string;
  email: string;
  firstName: string;
  lastName: string;
};

type Admin = {
  username: string;
  email: string;
  firstName: string;
  lastName: string;
  role?: string;
};

// `stereoTypicalJoe` is compatible with `User`
const stereoTypicalJoe: User = {
  username: "stereotypical_joe",
  email: "joe_stereo@typed.com",
  firstName: "Joe Stereo",
  lastName: "Typed",
};

Thanks to TypeScript’s structural type system, the same object is also compatible with Person, because Person is structurally identical to User:

// It is also compatible with `Person` which is identically typed to `User`
const stereoTypicalJoe: Person = {
  username: "stereotypical_joe",
  email: "joe_stereo@typed.com",
  firstName: "Joe Stereo",
  lastName: "Typed",
};

TypeScript also allows stereoTypicalJoe to be compatible with the Admin type, because equivalent types (role being an optional property in Admin) are compatible:

// Structural Type System allows `stereoTypicalJoe` to be compatible with `Admin` which is equivalently typed to `User`
const stereoTypicalJoe: Admin = {
  username: "stereotypical_joe",
  email: "joe_stereo@typed.com",
  firstName: "Joe Stereo",
  lastName: "Typed",
};

Structural compatibility is a practical fit for annotating and validating the types of JavaScript expressions, because the shapes of objects in web applications remain identical or similar while their composition varies greatly. This is in contrast to a nominal type system, which decides type conformance strictly based on type names.
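To see why this matters in practice, here is a minimal, hypothetical sketch: a function annotated with one type accepts a value declared with a structurally identical type, a call that a nominal type system would reject:

```typescript
type User = { username: string; email: string };

// greet only cares about the shape of its argument, not the type's name
function greet(user: User): string {
  return `Hello, ${user.username}!`;
}

// Person is a distinct type with an identical shape
type Person = { username: string; email: string };

const someone: Person = { username: "joe", email: "joe@typed.com" };

// Accepted: Person is structurally compatible with User
const greeting = greet(someone);
```

This is what makes TypeScript pleasant for typing plain JavaScript objects: any object with the right shape works, regardless of which alias it was declared with.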

In JavaScript, types are tagged to values at runtime, so there is no static type annotation. Hence the need for it in TypeScript.

TypeScript Tools

tsc, the TypeScript Compiler

The central tool that TypeScript uses for running processes related to static typing is the TypeScript compiler, tsc. The ultimate job of the TS compiler is to transform statically typed code into execution-ready, pure JavaScript code. This means that the type definitions and annotations we add inside a .ts or .tsx file are erased after compilation. In other words, the static typing we add to TS files does not carry over to the output .js or .jsx files.

For example, the following TypeScript code:

function greet(person: string, date: Date) {
  console.log(`Hello ${person}, today is ${date.toDateString()}!`);
}

greet("Joe", new Date());

compiles to the JS script below:

"use strict";
function greet(person, date) {
  console.log("Hello ".concat(person, ", today is ").concat(date.toDateString(), "!"));
}
greet("Joe", new Date());

Notice that the type annotations applied in the .ts file are not output to the .js version, but the TypeScript type checker has already validated them by that point.

In the interim, what we get is a chance to apply a consistent type system to validate the type safety and stability of our code — something we cannot perform with JavaScript alone.

TS Compiler Configuration

The TS compiler is configured with a default set of standard options inside the tsconfig.json file, and we can tailor it to our needs and preferences. In particular, from the compilerOptions object, we can set options for the target ECMAScript version, type checking, module resolution, experimental features, etc.

{
  "compilerOptions": {
    "target": "es5",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "noFallthroughCasesInSwitch": true,
    "module": "esnext",
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "noEmit": true,
    "jsx": "react-jsx",
    "experimentalDecorators": true
  },
  "include": ["src"]
}

TypeScript Type Checker

tsc is equipped with a static type checker that checks the value of an expression against its annotated or inferred type, and emits type errors when matches fail. The primary goal of the type checker is to check for type conformance. Its broader goal is to ensure the type safety of a codebase by catching, and suggesting corrections for, all possible kinds of type errors during development.

Type errors can originate from typos, changes to API interfaces, incomplete or inaccurate type definitions, incorrect annotations, incorrect assertions, etc.
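A few hypothetical examples of the kinds of mistakes the type checker rejects before the code ever runs (the rejected statements are shown commented out so the sketch compiles):

```typescript
interface Config {
  retries: number;
  timeoutMs: number;
}

const config: Config = { retries: 3, timeoutMs: 5000 };

// Each of the following would be rejected by the type checker:
//
// config.retires = 5;               // typo: Property 'retires' does not exist on type 'Config'
// config.retries = "three";         // wrong type: string is not assignable to number
// const c: Config = { retries: 3 }; // incomplete: property 'timeoutMs' is missing

const total = config.retries * config.timeoutMs;
```

In plain JavaScript, all three mistakes would slip through silently and only surface (if at all) as runtime bugs.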

The errors are output by the compiler to the command line console when a file is compiled with the tsc command:

tsc hello.ts

The TS type checker keeps track of the information from type definitions and annotations in our codebase. It then uses these descriptions to validate structural/shape conformance or otherwise throw an error. Type checking is performed during code changes and before compilation runs.

TS Type Checker – Linguistic Tooling

The TS type checker keeps track of updated type information while we write our code. This allows it to catch bugs and also helps us prevent them in the first place. We can correct typos, type errors and possible non-exception failures as they are caught and emitted by the type checker.

Based on the type descriptions it keeps, the type checker can also help us with code completion, quick fix suggestions, refactoring, formatting/reorganization, code navigation, etc.

TypeScript Support in VS Code

Microsoft’s Visual Studio Code, or VS Code in short, comes with integrated support for the TypeScript compiler, its static type checker, and other linguistic tooling mentioned above. It runs the tsc and the static type checker with the help of watch mode background task runners in the code editor.

For example, VS Code’s IntelliSense runs the TypeScript static type checker in the background to provide code completion on typed expressions.


Below is a list of other major VS Code features that aid everyday TS developers:

  • Type errors: type errors are highlighted inside the editor. Hovering over them shows the error messages. Error highlighting helps us investigate and fix errors easily.

  • Quick fix suggestions: associated quick-fix suggestions appear when hovering over an error. We can apply the editor’s automatic fix or correct the problem ourselves.

  • Syntax errors and warnings: syntax errors are highlighted by VS Code’s linguistic support for TypeScript, which helps fix them instantly.

  • Code navigation: we can quickly navigate to a particular code snippet by looking it up with shortcuts. Code navigation helps us avoid errors by gaining clarity on what we look up.

VS Code also provides formatting/reorganizing, refactoring and debugging features. All these features help us write robust, stable code that contributes to an application’s maintainability and scalability.

TypeScript Type Definitions / Declaration Packages

TypeScript comes with built-in definitions for all standard JavaScript APIs. They include type definitions for object types like Math and Object, browser-related DOM APIs, etc. These can be accessed from anywhere in the project without needing to import the types.

Apart from built-in types, application-specific entities have to be typed properly. It is a convention to use separate type declaration files in order to keep type definitions apart from feature code.

TypeScript Type Declaration – .d.ts Files

Application specific type declarations are usually collected in a file suffixed with .d.ts. It is common to declare all types and interfaces inside a single index.d.ts file and export them from there.

export interface User {
  username: string;
  email: string;
  firstName: string;
  lastName: string;
}

export interface Person {
  username: string;
  email: string;
  firstName: string;
  lastName: string;
}

export interface Admin {
  username: string;
  email: string;
  firstName: string;
  lastName: string;
  role?: string;
}

When annotating, we have to import each type into the file where we use it:

import { User, Person, Admin } from "src/interfaces";

const anyUser: User = {
  username: "stereo_joe",
  email: "joe@typed.com",
  firstName: "Joe",
  lastName: "Typed",
};

const typicalJoe: Person = {
  username: "typical_joe",
  email: "joe_typical@typed.com",
  firstName: "Joe Structure",
  lastName: "Typed",
};

const stereoTypicalJoe: Admin = {
  username: "stereotypical_joe",
  email: "joe_stereo@typed.com",
  firstName: "Joe Stereo",
  lastName: "Typed",
};

TypeScript Type Packages – DefinitelyTyped @types

Existing JavaScript libraries that support TypeScript offer type definition packages for use with it. DefinitelyTyped is a popular type definition repository that hosts collections of type definition packages for major JS libraries.

Type definition packages hosted by DefinitelyTyped are published under the @types scope. We can install the necessary definition packages with npm or yarn. For example, the React type definitions can be added to node_modules with the following scoped package:

npm install --save-dev @types/react

Then, we can use the types inside our app. It is important to notice that, unlike with .d.ts declaration files, we don’t need to import the types from their node_modules files. That’s because the TypeScript compiler automatically includes declaration packages found under node_modules/@types.

TypeScript’s Extended Features

Apart from implementing a static type system to produce a less error-prone, maintainable and scalable codebase, TypeScript extends the language with additional features and their respective syntax. TypeScript enums are one such addition that injects type objects into the JavaScript runtime. TypeScript classes are implemented in a way that produces types. Some aspects of TS classes, such as member privacy and decorators, are implemented differently than in JavaScript.

In the following sections, we try to understand how they contrast.

TypeScript Extensions – Enums

TypeScript adds a special data structure called enum to address the need for organizing data around an intent, like defining a set of categories or a strict set of subscription options. Enums are not available in JavaScript. In TypeScript, an enum introduces a representative JavaScript object at runtime, which subsequent code can access to read its values or those of its properties.

Enums serve as an efficient replacement for objects that would otherwise be stored in and accessed from a database table. They inherently generate types that can be used to annotate expressions or object properties. You can find an in-depth example of TS enums in this refine.dev blog post.
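As a brief, hypothetical sketch, an enum can annotate expressions as a type while also existing as a runtime object:

```typescript
// A hypothetical subscription enum for illustration
enum Subscription {
  Free = "FREE",
  Pro = "PRO",
  Enterprise = "ENTERPRISE",
}

// The enum generates a type, usable in annotations such as Record<Subscription, number>
const monthlyPrice: Record<Subscription, number> = {
  [Subscription.Free]: 0,
  [Subscription.Pro]: 10,
  [Subscription.Enterprise]: 49,
};

// It is also a plain object at runtime, so its members can be read like any other
const proPrice = monthlyPrice[Subscription.Pro];
```

The Record annotation also enforces exhaustiveness: forgetting a plan in monthlyPrice is a compile-time error.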

TypeScript Extended Features – Classes as Types

In TypeScript, a class also generates a type from its constructor function. During static type checking, an instance of a given class is by default inferred to have the type generated from that class. For a detailed example, check this refine.dev blog post.
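A minimal sketch, with a hypothetical Point class, of a class serving as both a value and a type:

```typescript
class Point {
  constructor(public x: number, public y: number) {}

  distanceFromOrigin(): number {
    return Math.sqrt(this.x ** 2 + this.y ** 2);
  }
}

// `Point` names both the constructor (a runtime value) and the
// instance type, so it can be used directly in annotations:
const p: Point = new Point(3, 4);
const d = p.distanceFromOrigin();
```

Any object with the same shape (x, y and a matching method) would also satisfy the Point annotation, since the class type is checked structurally like any other.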

In contrast, class instances in JavaScript are tagged with their types at runtime.

TypeScript Extended Features – Class Member Visibility

TypeScript supports class member visibility on ES2015-style classes. It implements member privacy at three levels: public, protected and private. Privacy of class members in TypeScript is modeled on object-oriented concepts built over prototypal inheritance.

For example, public members are accessible from everywhere: from instances, the class itself, as well as subclasses. protected members are not accessible from instances; they are only accessible from within the class and its subclasses. private members are only accessible from inside the class.

In contrast, starting with ES2022, JavaScript implements class property privacy using the # syntax. Property access in JavaScript classes is either fully public or fully private. In addition, a private class property in JavaScript is not inheritable, because it is not accessible from the prototype chain of the class instance.
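The three TypeScript visibility levels can be sketched with a hypothetical Account class (the rejected access is shown commented out so the sketch compiles):

```typescript
class Account {
  public owner: string;     // accessible from everywhere
  protected region: string; // accessible in Account and its subclasses
  private balance: number;  // accessible only inside Account

  constructor(owner: string, region: string, balance: number) {
    this.owner = owner;
    this.region = region;
    this.balance = balance;
  }

  getBalance(): number {
    return this.balance; // private access is fine here
  }
}

class SavingsAccount extends Account {
  describe(): string {
    // `region` is reachable from a subclass; `balance` would not be
    return `${this.owner} (${this.region})`;
  }
}

const savings = new SavingsAccount("Joe", "HK", 100);
// savings.balance; // rejected by the type checker: 'balance' is private
const balance = savings.getBalance();
```

Note that TypeScript's private is enforced only at compile time; after compilation the property is an ordinary JavaScript property, unlike the runtime-enforced # fields of ES2022.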

TypeScript Extended Features – Class Decorators

Decorators are a design pattern in programming. In any language, the decorator pattern provides an interface for adding behavior to a class instance dynamically without affecting other instances. Decorators are straightforward to implement in JavaScript, especially with clever functional programming, and just as much so in TypeScript.

However, TypeScript has a Stage 3 proposal that brings class decorators with a special @ syntax, quite distinct from conventional decorator implementations in JavaScript and TypeScript. Class decorators in TypeScript allow classes and their members to be decorated with runtime behaviors: the class itself can be decorated, as can fields, methods and accessors. For a comprehensive example of class decorators, please check this blog post on refine.dev.

TypeScript Advanced Features

Other extended features in TypeScript are related to iterators and generators, mixins, modules, namespacing, JSX support, etc.

Most of these advanced concepts require special considerations to facilitate relevant static typing. For example, TypeScript iterators and generators have to implement the Symbol.iterator property and should be annotated with the Iterable interface. TypeScript mixins make use of complex relationships between class instance types, subtypes, multiple interface implementations, class member privacy, prototypal inheritance and class expressions. Too much, yet too good…

Getting a good grasp of these advanced TypeScript features requires gradual adoption of the language as a whole, as we aim to keep our codebase type-safe and stable, and our web application maintainable and scalable.

Summary

In this post, we compared TypeScript with JavaScript. In making the comparisons, we gained useful insights into how the two type systems and their implementations differ. We got a high-level view of the role of the TypeScript compiler, the mechanisms the static type checker uses to catch and prevent type errors, and the linguistic tooling that helps developers write robust, highly stable application code. We also contextualized some of TypeScript’s notable extended features that differ from those in JavaScript in light of TypeScript’s static type system.

Author: Abdullah Numan

The post TypeScript vs JavaScript – A Detailed Comparison appeared first on ProdSens.live.

]]>
https://prodsens.live/2023/12/22/typescript-vs-javascript-a-detailed-comparison/feed/ 0
How much were product managers paid in 2023? https://prodsens.live/2023/12/22/how-much-were-product-managers-paid-in-2023/?utm_source=rss&utm_medium=rss&utm_campaign=how-much-were-product-managers-paid-in-2023 https://prodsens.live/2023/12/22/how-much-were-product-managers-paid-in-2023/#respond Fri, 22 Dec 2023 10:25:37 +0000 https://prodsens.live/2023/12/22/how-much-were-product-managers-paid-in-2023/ how-much-were-product-managers-paid-in-2023?

We’ve experienced a lot of economic uncertainty over the last few years. What’s been the impact on hiring…

The post How much were product managers paid in 2023? appeared first on ProdSens.live.

]]>
how-much-were-product-managers-paid-in-2023?

We’ve experienced a lot of economic uncertainty over the last few years. What’s been the impact on hiring trends and product manager salaries? Read more »

The post How much were product managers paid in 2023? appeared first on Mind the Product.

The post How much were product managers paid in 2023? appeared first on ProdSens.live.

]]>
https://prodsens.live/2023/12/22/how-much-were-product-managers-paid-in-2023/feed/ 0
Run Go + HTMX in the Cloud with Acorn https://prodsens.live/2023/12/18/run-go-htmx-in-the-cloud-with-acorn/?utm_source=rss&utm_medium=rss&utm_campaign=run-go-htmx-in-the-cloud-with-acorn https://prodsens.live/2023/12/18/run-go-htmx-in-the-cloud-with-acorn/#respond Mon, 18 Dec 2023 04:25:01 +0000 https://prodsens.live/2023/12/18/run-go-htmx-in-the-cloud-with-acorn/ run-go-+-htmx-in-the-cloud-with-acorn

Introduction In a recent article, I demonstrated how babyapi, a library I created, makes it easy to write…

The post Run Go + HTMX in the Cloud with Acorn appeared first on ProdSens.live.

]]>
run-go-+-htmx-in-the-cloud-with-acorn

Introduction

In a recent article, I demonstrated how babyapi, a library I created, makes it easy to write a TODO app with a RESTful API and HTMX frontend using only 150 lines of code. babyapi abstracts the HTTP handling based on a provided struct and serves HTMX templates for a dynamic frontend.

It works great in a tutorial, but you might have been left thinking: “what about persistent storage and running in the cloud?”

This simple tutorial will show you how to connect your babyapi application to Redis storage and quickly run in the cloud using the free Sandbox from Acorn.

Storage

The babyapi.Storage interface and the SetStorage modifier allow implementing any storage backend for your application. As mentioned in the previous article, the babyapi/storage package provides a generic implementation of the interface with helpers for setting up local file or Redis storage. This time, since the goal is to run in the cloud rather than just locally, we’ll use the Redis version:

db, err := storage.NewRedisDB(redis.Config{
    Server:   host + ":6379",
    Password: password,
})
if err != nil {
    return fmt.Errorf("error setting up redis storage: %w", err)
}

api.SetStorage(storage.NewClient[*TODO](db, "TODO"))

The full example code for this tutorial is available in the babyapi GitHub repository

With this simple addition, the TODO application is ready to connect to a Redis instance and run in the cloud.

Acorn

If you’re not already familiar with Acorn, I recommend checking out the official docs to learn more about it! Basically, it is an app platform that makes it easy to deploy cloud applications and their dependencies by describing them in a simple Acornfile. Instead of configuring all of the required Kubernetes manifests to run our application in the cloud, we can just use an Acornfile.

First, we need a Dockerfile to create our app container. Then, instead of building and pushing this image to a container registry and writing Kubernetes manifests (by copy/pasting from an online example, if we’re being honest), let’s take a look at Acorn.

Before deploying the updated TODO app, we need the Redis database dependency. If you search “run redis in k8s”, all of the top results look exhausting. Alternatively, the Acorn documentation for Redis looks much simpler.

We can even use the Acornfile from the documentation’s example with one little change to build and run the TODO app with Redis database:

services: db: {
    image: "ghcr.io/acorn-io/redis:v7.#.#-#"
}

containers: app: {
    build: {
        context: "."
    }
    consumes: ["db"]
    ports: publish: "8080/http"
    env: {
        REDIS_HOST: "@{service.db.address}"
        REDIS_PASS: "@{service.db.secrets.admin.token}"
    }
}

Not only did this save us the effort of configuring everything for the TODO app, we even get the entire Redis dependency with a persistent volume and a random password. This is already way better than the no-volume, no-password K8s manifests I would have thrown together for this example.

Run it for real

Now that you have seen how it all works, you can run it for yourself using this button:

Run in Acorn

At the time of posting this, Acorn offers free Sandbox accounts to anyone with a GitHub account.


Once it has finished deploying, open the endpoint and append /todos in the URL to reveal the HTMX UI! You can even use the babyapi CLI from your terminal to create a new TODO and watch it show up in the UI automatically:

export ACORN_ADDR=http://COPY_ENDPOINT_FROM_ACORN

go run -mod=mod \
  github.com/calvinmclean/babyapi/examples/todo-htmx \
  -address $ACORN_ADDR \
  post TODOs '{"title": "use babyapi on Acorn!"}'

Conclusion

App platforms like Acorn and easy-to-use libraries like babyapi take care of the boring and tedious parts of software engineering and let you focus on the things that interest you. Try them out for yourself and let me know if you have any feature requests or issues with babyapi.

Thanks for reading!

The post Run Go + HTMX in the Cloud with Acorn appeared first on ProdSens.live.

]]>
https://prodsens.live/2023/12/18/run-go-htmx-in-the-cloud-with-acorn/feed/ 0
Using Google Cloud Vertex AI Code Chat to automate programming test scoring https://prodsens.live/2023/12/18/using-google-cloud-vertex-ai-code-chat-to-automate-programming-test-scoring/?utm_source=rss&utm_medium=rss&utm_campaign=using-google-cloud-vertex-ai-code-chat-to-automate-programming-test-scoring https://prodsens.live/2023/12/18/using-google-cloud-vertex-ai-code-chat-to-automate-programming-test-scoring/#respond Mon, 18 Dec 2023 04:24:52 +0000 https://prodsens.live/2023/12/18/using-google-cloud-vertex-ai-code-chat-to-automate-programming-test-scoring/ using-google-cloud-vertex-ai-code-chat-to-automate-programming-test-scoring

Problem Traditionally, educators create unit tests to automatically score students’ programming tasks. However, the precondition for running unit…

The post Using Google Cloud Vertex AI Code Chat to automate programming test scoring appeared first on ProdSens.live.

]]>
using-google-cloud-vertex-ai-code-chat-to-automate-programming-test-scoring

Problem

Traditionally, educators create unit tests to automatically score students’ programming tasks. However, the precondition for running unit tests is that the project code must be runnable, or compile without errors. Therefore, if students cannot keep the project fully runnable, they will receive a zero mark. This is undesirable, especially in a practical programming test: even if students submit partially correct code statements, they should earn some marks. As a result, educators need to review all source code one by one, a task that is exhausting, time-consuming, and hard to grade fairly and consistently.

Solution — Scoring a programming code with LLM

For a programming test, we provide starter code to students. They are required to read the instructions and write additional code to meet the requirements. We already have a standard answer. We store the question name, instructions, starter code, answer and mark in an Excel sheet, which is then used to prompt the model and score student answers.

1. Crafting the Chat Prompt:

  • Design a comprehensive chat prompt that incorporates essential elements such as “instruction”, “starter”, “answer”, “mark”, “student_answer”, and “student_commit”.
  • Utilize “Run on Save” functionality to encourage students to commit their code regularly upon saving. This serves as a reliable indicator of their active engagement and honest efforts.

2. Setting Up the LLM:

  • Create an LLM within Vertex AI, specifically the codechat-bison model.
  • Configure the temperature setting to a low value since scoring doesn’t necessitate creative responses.

3. Utilizing PydanticOutputParser:

  • Employ PydanticOutputParser to generate the desired output format instructions and extract them into a Python object.

4. Connecting the Components:

  • Seamlessly integrate all the aforementioned components to establish a smoothly functioning chain. This ensures efficient prompt management and effective LLM utilization.

https://github.com/wongcyrus/GitHubClassroomAIGrader/blob/main/gcp_vertex_ai_grader.ipynb

from langchain.chat_models import ChatVertexAI
from langchain.prompts.chat import ChatPromptTemplate
import langchain
langchain.debug = False
from langchain.output_parsers import PydanticOutputParser
from langchain.pydantic_v1 import BaseModel, Field, validator
from langchain.prompts import PromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate

# Define your desired data structure.
class ScoreResult(BaseModel):
    score: int = Field(description="Score")
    comments: str = Field(description="Comments")
    calculation: str = Field(description="Calculation")

parser = PydanticOutputParser(pydantic_object=ScoreResult)


def score_answer(instruction, starter, answer, mark, student_answer, student_commit, temperature=0.1, prompt_file="grader_prompt.txt"):
    with open(prompt_file) as f:
        grader_prompt = f.read()

    prompt = PromptTemplate(
        template="You are a Python programming instructor who grades student Python exercises.\n{format_instructions}\n",
        input_variables=[],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )
    system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)
    human_message_prompt = HumanMessagePromptTemplate(
        prompt=PromptTemplate(
            template=grader_prompt,
            input_variables=["instruction", "starter", "answer", "mark", "student_answer", "student_commit"],
        )
    )

    chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

    llm = ChatVertexAI(
        model_name=model_name,
        location=location,
        max_output_tokens=1024,
        temperature=temperature,  # use the caller-supplied temperature
    )
    runnable = chat_prompt | llm | parser

    # Get the result
    data = {
        "instruction": instruction,
        "starter": starter,
        "answer": answer,
        "mark": mark,
        "student_answer": student_answer,
        "student_commit": student_commit,
    }
    output = runnable.invoke(data)
    return output

The output of parser.get_format_instructions() in the system prompt:

The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}
the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.

Here is the output schema:
```
{"properties": {"score": {"title": "Score", "description": "Score", "type": "integer"}, "comments": {"title": "Comments", "description": "Comments", "type": "string"}, "calculation": {"title": "Calculation", "description": "Calculation", "type": "string"}}, "required": ["score", "comments", "calculation"]}
```

grader_prompt.txt

Programming question:
{instruction}

Starter:
{starter}

StandardAnswer:
{answer}

StudentAnswer:
{student_answer}

Number of times code commit to GitHub: {student_commit}

Student add the code statement from Starter.
Student follows the question to add more code statements.

Rubric:
- If the content of StudentAnswer is nearly the same as the content of Starter, score is 0 and comment “Not attempted”. Skip all other rules.
- The maximum score of this question is {mark}.
- Compare the StudentAnswer and StandardAnswer line by line and Programming logic. Give 1 score for each line of correct code.
- Don't give score to Code statements provided by the Starter.
- Evaluate both StandardAnswer and StudentAnswer for input, print, and main function line by line.
- Explain your score calculation.
- If you are unsure, don’t give a score!
- Give comments to the student.

The output must be in the following JSON format:
"""
{{
"score" : "...",
"comments" : "...",
"calculation" : "..."
}}
"""

In scenarios where a failure occurs, manual intervention is necessary to rectify the issue. This can involve switching to a more robust model, fine-tuning the parameters, or making slight adjustments to the prompt. To initiate the troubleshooting process, it’s essential to create a backup of the batch job output. This serves as a crucial reference point for analysis and problem-solving.

backup_student_answer_df = student_answer_df.copy()

Manually execute the unsuccessful cases by adjusting the following code and running it again.

print(f"Total failed cases: {len(failed_cases)}")

original_model_name = model_name
# You may change to use a more powerful model
# model_name = "codechat-bison@002"

if len(failed_cases) > 0:
    print("Failed cases:")
    # Iterate over a copy so items can be removed from the list safely
    for failed_case in list(failed_cases):
        print(failed_case)
        # Get row from student_answer_df by Directory
        row = student_answer_df.loc[student_answer_df['Directory'] == failed_case["directory"]]
        question = failed_case['question']
        instruction = standard_answer_dict[question]["Instruction"]
        starter = standard_answer_dict[question]["Starter"]
        answer = standard_answer_dict[question]["Answer"]
        mark = standard_answer_dict[question]["Mark"]
        student_answer = row[question + " Content"]
        print(student_answer)
        student_commit = row[question + " Commit"]
        result = score_answer(instruction, starter, answer, mark, student_answer, student_commit, temperature=0.3)
        # Update student_answer_df with the result
        row[question + " Score"] = result.score
        row[question + " Comments"] = result.comments
        row[question + " Calculation"] = result.calculation
        # replace row in student_answer_df
        # student_answer_df.loc[student_answer_df['Directory'] == failed_case["directory"]] = row
        # Remove the resolved case from failed_cases
        failed_cases.remove(failed_case)

model_name = original_model_name

Based on experience, most cases can be resolved by changing some parameters.

The output of human_review.ipynb

Conclusion

In this approach, we leverage the power of a pretrained Language Model (LLM), specifically the code chat model from Vertex AI, to score students’ programming assignments. Unlike traditional unit testing methods, this technique allows for partial credit to be awarded even if the submitted code is not fully runnable.

Key to this process is the crafting of a well-structured chat prompt that incorporates essential information such as instructions, starter code, answer, mark, student answer, and student commit status. This prompt guides the LLM in evaluating the student’s code.

The LLM is configured with a low temperature setting to ensure precise and consistent scoring. A PydanticOutputParser is employed to generate the output-format instructions for the prompt and to parse the model’s reply into a Python object.
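
As a rough, stdlib-only analogue of what that parsing step achieves (the actual project uses a Pydantic model via LangChain’s PydanticOutputParser; the JSON field names below simply mirror the score, comments, and calculation fields used earlier), the model’s reply is turned into a typed Python object:

```python
# Stdlib-only sketch of what PydanticOutputParser achieves: turn the model's
# JSON reply into a typed Python object. The real code uses a Pydantic model.
import json
from dataclasses import dataclass

@dataclass
class ScoreResult:
    score: float
    comments: str
    calculation: str

def parse_score_reply(reply_text: str) -> ScoreResult:
    """Parse the LLM's JSON reply into a ScoreResult object."""
    data = json.loads(reply_text)
    return ScoreResult(
        score=float(data["score"]),
        comments=data["comments"],
        calculation=data["calculation"],
    )

# A hypothetical model reply:
reply = '{"score": 7, "comments": "Logic mostly correct.", "calculation": "10 - 3"}'
result = parse_score_reply(reply)
print(result.score)  # 7.0
```

Getting a structured object back (rather than free text) is what lets the scores, comments, and calculations be written straight into the results DataFrame.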

By seamlessly integrating these components, we establish a smooth workflow that efficiently manages prompts and utilizes the LLM’s capabilities. This enables accurate and reliable scoring of programming assignments, even for partially correct code submissions.

This approach addresses the challenges faced by educators in grading programming assignments, reduces manual effort, and promotes fair and consistent assessment.

Project collaborators include Markus Kwok, Hau Ling, Lau Hing Pui, and Xu Yuan from the IT114115 Higher Diploma in Cloud and Data Centre Administration and Microsoft Learn Student Ambassadors candidates.

About the Author

Cyrus Wong is a senior lecturer at the Hong Kong Institute of Information Technology, focusing on teaching public cloud technologies. A passionate advocate for cloud tech adoption in media and events, he is an AWS Machine Learning Hero, Microsoft MVP (Azure), and Google Developer Expert (Google Cloud Platform).


Using Google Cloud Vertex AI Code Chat to automate programming test scoring was originally published in Google Developer Experts on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Using Google Cloud Vertex AI Code Chat to automate programming test scoring appeared first on ProdSens.live.

30 of the Best Free WordPress Blog Themes in 2023 https://prodsens.live/2023/10/07/30-of-the-best-free-wordpress-blog-themes-in-2023/?utm_source=rss&utm_medium=rss&utm_campaign=30-of-the-best-free-wordpress-blog-themes-in-2023 https://prodsens.live/2023/10/07/30-of-the-best-free-wordpress-blog-themes-in-2023/#respond Sat, 07 Oct 2023 13:24:44 +0000 https://prodsens.live/2023/10/07/30-of-the-best-free-wordpress-blog-themes-in-2023/ 30-of-the-best-free-wordpress-blog-themes-in-2023


If you‘ve got an opinion to share, you’ll need a blog to reach the right audience. However, choosing the right free WordPress blog theme can be a challenge. I’ve chosen the wrong theme in the past. That decision left me frustrated, searching for support so I could create the blog of my dreams.

Let’s help you avoid the same mistake.

In this post, you’ll find 30 of the best free WordPress blog themes. Before we dive in, let’s explore what you should expect from free WordPress blog themes.

Characteristics of Great WordPress Themes

If, like me, you aren’t a coder, you need to choose your WordPress blog theme carefully. Why? Your theme determines the appearance of your website.

However, beautiful design isn’t the only characteristic of the best WordPress themes. When considering a WordPress theme, look out for these properties.

Site Security

Once you’re online, your web property is vulnerable to attack. Brands like Facebook and LinkedIn have had their fair share of data breaches. To combat ongoing threats to your WordPress website, use a theme that is as secure as possible.

While no WordPress theme is 100% secure, choosing one that’s regularly updated ensures your theme code is protected from new security vulnerabilities.

Responsive Design

With 54% of website traffic coming from mobile devices, a mobile-friendly WordPress blog theme is vital. Mobile-friendly themes improve user experience by automatically resizing your website content to fit different screens.

Flexible Customization

Great WordPress blog themes offer multiple customization options. This makes it easy for non-coders to design a unique site. Some blog themes even have demo sites you can import, edit, and turn into yours.

Detailed Documentation

Choosing a WordPress blog theme that has great documentation is important. Armed with tutorials and resources, you can fix any challenge with the theme without waiting for customer or tech support.

Plugin Compatibility

WordPress plugins let you add functionality to your website without writing code. That said, some plugins may not be compatible with certain themes. A quick, qualitative way to assess a theme’s plugin compatibility is to check its number of downloads.

If a blog theme has many downloads, that’s a good sign it will be compatible with the plugins you plan to use.

Now that you know the characteristics of great WordPress blog themes, let’s look at some themes that fit these criteria.

1. Astra

Free WordPress blog theme: Astra

Astra is a fast and free WordPress blog theme that suits a variety of use cases. Its light weight also makes it a quality option to consider when building a blog. The theme comes with several ready-to-use blog websites you can import, modify, and use.

Many bloggers are fine with the limited customization options for backgrounds, typography, and spacing in the free version of Astra. Astra’s Pro or Agency plan lets you access features like auto-loading previous posts, adding author sections, and removing featured image padding.

What we like: Astra’s compatibility with major page builders like Elementor, Beaver, and Brizy makes it one of the best WordPress themes for bloggers.

2. Kadence

Free WordPress blog theme: Kadence

Kadence is a lightweight WordPress blog theme that makes creating beautiful, fast-loading, and accessible websites a breeze. It features an easy-to-use, drag-and-drop header and footer builder to build any type of header in minutes.

What we like: Kadence stands out because of its clean blog styling, including featured image placements and sticky sidebar options. Plus, Kadence has a deep integration with the core block editor, so your content will match what you see in the admin panel.

3. Blog Way

Free WordPress theme: Blog Way

Blog Way is a simple and professional WordPress blog theme. It’s clean, well-coded, and has a modern layout. Blog Way is especially great for blogs, news sites, and travel sites. It has multiple customizable features and ensures high-quality performance to help boost your site traffic.

What we like: Blog Way comes with an option to change the color of your entire site and offers social links for you to connect your site with your social accounts.

4. OceanWP

Free WordPress theme: OceanWP

OceanWP is a popular multipurpose WordPress theme.

It offers a lightweight, SEO-friendly, and responsive foundation for blog building. Unlike Astra, OceanWP offers more customization options and gives you greater control over your blog’s design. This is one reason for its popularity among users.

OceanWP offers only 15 free responsive pre-made theme demos, which is fewer than what Astra offers. To get more, you’d have to upgrade to the premium version.

What we like: This WordPress blog theme is compatible with popular page builders, including Elementor, Beaver Builder, and Divi. It also supports several third-party plugins to help extend your site’s functionality.

5. Mesmerize

Free blog theme: Mesmerize

Mesmerize is a free blog theme that lets you customize your site without hassle. Start with a pre-built homepage and use the five header designs, slideshow capabilities, gradient overlays, and more to make the look and feel your own.

In addition, there are 30 ready-to-use content sections for you to build pages quickly and easily. Not to mention, there are helpful drag-and-drop features.

What we like: Mesmerize is mobile-responsive, and it works well with WooCommerce should you ever need to set up a store.

6. Kale

Kale WordPress blog theme for food bloggers

Kale is a free WordPress blog theme for food bloggers. You can choose from several feed displays to organize your written content and images of the dishes you’re featuring. The built-in social media sidebar menus and icons make it easy for visitors to locate, view, and follow your accounts.

What we like: This theme comes with a special front page that includes featured posts and a large highlight post. You can also show a banner or a post slider in the header.

7. Avant

Free WordPress blog theme: Avant

The Avant blog theme comes with seven different header styles, three footer styles, five blog layout templates, full site color settings, and much more built into the WordPress Customizer. Avant integrates seamlessly with WooCommerce and page builders like Elementor and SiteOrigin.

What we like: Avant comes with seven header layouts, five blog layouts, three footer layouts, and full-site color settings.

8. Blossom Feminine

Free WordPress blog theme: Blossom Feminine

Blossom Feminine is a free WordPress theme for creating a fashion, lifestyle, journal, travel, beauty, or food blog. The blog theme is mobile-friendly, search engine optimized, and fast. In addition, the theme is WooCommerce compatible, translation-ready, and comes with regular updates.

What we like: Have a newsletter you want to promote? The well-placed Newsletter section can help you to grow your email list. Keeping in touch with your visitors is a breeze.

9. Blossom Fashion

WordPress blog theme free: Blossom Fashion

Blossom Fashion is perfect for building a stylish blog without spending a penny. While free, the theme offers premium features like WooCommerce compatibility, font options, an advertisement widget, an Instagram section, and more.

What we like: The theme is easy to use and comes with extensive documentation. There’s also support if you need help.

10. Blossom Travel

WordPress blog theme free: Blossom Travel

Blossom Travel is a fast-loading and mobile-friendly WordPress theme for travel blogs. The free blog theme blends accessible design with extensive features like social media integrations, theme color options, and lightbox image styling.

Blossom Travel has an Instagram section, email subscription section, and social media widgets so visitors can easily connect with you.

What we like: Blossom Travel’s HTML map section lets your visitors visualize where you’ve traveled.

11. Blossom Pin

WordPress blog theme free: Blossom Pin

The Blossom Pin theme features a Pinterest-style design, using a vibrant masonry layout with three posts/page layout options. Its infinite scroll lets visitors browse without distraction. The free WordPress blog theme is SEO-optimized and easily customizable.

What we like: You can choose from its different colors and hundreds of Google fonts.

12. Elegant Pink by Rara Business

WordPress blog theme: Elegant Pink

Another dynamic Pinterest-like layout, Elegant Pink is a free and beautifully designed WordPress theme that combines soft colors with a clean layout to present your blog to the world. Elegant Pink also has a slider section above the masonry-design post on the homepage.

What we like: Elegant Pink is a responsive theme, so it’ll look great on every device.

13. Writee

Free WordPress theme: Writee

Writee is a WordPress theme for photography or image-heavy blogs — the theme has a slider hero image feature that lets you include several full-width images. Writee also makes managing an online store simple with its WooCommerce integration.

What we like: If you have beautiful images on your site, Writee can help you showcase them. There are full-width or boxed sliders that can show off your images with style.

14. Hemingway

Free WordPress theme: Hemingway

Hemingway is a simple two-column blogging theme that keeps your content organized and easy to read. It includes a parallax scrolling feature, which adds an interactive, video-like experience to your blog pages.

Hemingway’s translation-ready feature comes with pre-made language files. Your website can be automatically translated into several other languages with just a click.

Best for: Bloggers who want a minimalist theme. If you want to upload images, change accent colors, and start writing, this theme is for you.

15. Ashe

Free WordPress theme: Ashe

Ashe is an elegant, customizable, and beginner-friendly WordPress theme built specifically for bloggers. It provides 14 image-heavy pre-made templates, and you can adapt it to any blog niche.

Its default shop page is one of its notable features. Pair this with the compatible WooCommerce integration, and you’ll have an ecommerce store running easily. The featured content slider and Instagram slider widget are other notable features. These help you showcase recent or popular content from your blog or Instagram page.

What we like: Ashe has a “promo boxes” feature for displaying ads and linked images. This WordPress theme is translation-ready, SEO-optimized, and works well with page builders such as Elementor and Beaver.

16. Neve

A best free WordPress theme: Neve

Neve is a powerful, free WordPress blog theme from ThemeIsle. It offers a fully responsive mobile-first design and extensive customization options to tailor your blog to your brand image.

What we like: Neve frequently updates its theme to guarantee the best security and access to new features. Its theme options panel will help you get started quickly.

17. GeneratePress

A best free WordPress theme: GeneratePress

GeneratePress is an optimized, super lightweight theme that focuses on speed and usability. By using only 30KB of resources, this theme gives you a quick-loading site.

The WordPress blog theme is also customizable. It’s compatible with page builders, such as the Gutenberg block editor, Beaver, and Elementor.

Unfortunately, you do not get starter sites on the free version of this theme. So you’ll have to build from scratch.

What we like: GeneratePress is famous for its over-the-top customer service — even the theme’s developer, Tom Usborne, would sometimes answer questions in the support forum.

18. Total

Total WordPress free theme

Total is a blogging theme with a masonry-style layout, which places your latest three, six, or nine blog posts in a grid format. There’s also a portfolio section if you want to share some of your artistic work.

What we like: Total is SEO-friendly, compatible with the most popular page builder plugins, and has a one-click demo import to get you up and running fast.

19. Spacious

Spacious WordPress free theme

Spacious offers four page layouts, two templates, four blog layouts, and several custom widgets and widget areas to choose from. Building your site with Spacious is easy because their downloadable demo sites are available for inspiration and support.

What we like: Spacious is built with speed in mind. Sites built with Spacious can load in about one second.

20. Blog Diary

Blog Diary WordPress theme

Blog Diary is a lightweight and minimalistic WordPress theme for trendy food or travel blogs. It comes with slider functionality and color-picking options, and it is easy to get up and running on the fly. In addition, it’s mobile-responsive and compatible with the Gutenberg editor.

Best for: Writers who want to focus on content. This theme is designed to be set up quickly, without trial and error or experimentation.

21. Zakra

Zakra WordPress theme

Zakra is another fast and lightweight multipurpose theme. It includes 10+ free starter sites, some of which are designed specifically for blogs. The designs are clean and pleasing to the eyes.

This theme’s Gutenberg compatibility is one of its selling points. It also works well with other page builders, including Elementor and Brizy. So the templates are easy to customize.

What we like: This theme is also SEO-friendly, translation-ready, and frequently updated to ensure your site is as secure as humanly possible.

22. Editorial

Editorial WordPress theme

The Editorial blog theme is visually engaging, simple to use, and flexible enough to organize large amounts of editorial content in a way that won’t overwhelm readers. Editorial also comes with a variety of convenient widgets that let you easily customize your page sections, no coding needed.

What we like: Editorial comes with a customizer that allows you to change most of the theme settings easily with live previews.

23. Brilliant

Brilliant WordPress blog theme

Brilliant is a blog and online magazine theme that allows you to artistically pair your blog posts with photo or video content. You can add or edit your own custom logo on your homepage, as well as customize your theme’s accent colors to match your branding.

What we like: Brilliant is translation-ready, so visitors can read your content in different languages.

24. Poseidon

Poseidon WordPress theme

If you’re looking to include large, professional-looking photographs on your blog, Poseidon is the WordPress theme for you — this theme offers a full-width image slideshow on the homepage. The layout is mainly white to create a spacious, organized look.

What we like: Poseidon includes completely customizable navigation bars to enhance user experience and improve your site’s configuration.

25. Author

Author WordPress theme for blogging

Author is a straightforward theme suitable for all blog types, from business to photography to ecommerce. Its minimalist look helps readers to focus on your content easily. What’s unique about this theme is its design, which is not just for readability but for accessibility.

What we like: Author comes with WooCommerce support for eCommerce stores.

26. ColorMag

Free WordPress theme for bloggers: ColorMag

ColorMag is an elegant WordPress theme that suits news blogs and magazine sites. Its 8+ free pre-built demos are neatly ordered, visually appealing, and have a professional feel.

Each demo includes multiple ad spaces, so you can monetize your site. Additionally, ColorMag supports sticky menus, which make it easy for users to navigate your site, especially on pages with long-form content.

What we like: ColorMag is compatible with major page builders, such as Elementor, Gutenberg, and Beaver Builder. It works seamlessly with WooCommerce, too.

27. Go

Free WordPress theme for bloggers: Go

Go is a WordPress blog theme that’s minimalist and compatible with the Gutenberg builder. Its simple interface makes it suitable for code-averse people who want to set up their websites quickly.

What we like: This WordPress theme also has great fonts, making your web content readable and offering users a great experience.

28. SiteOrigin Unwind

Free WordPress theme for bloggers: SiteOrigin

SiteOrigin Unwind is a free WordPress blog theme that’s customizable with its page builder plugin. This WordPress theme is excellent if you want a website with a simplified blogging layout.

What we like: Unwind also has great code, which means your website loads faster. Your chances of ranking high in search engines increase with load time. Plus, this theme lets you build custom headers and backgrounds.

29. Sydney

Free WordPress theme for bloggers: Sydney

Whether you are a business owner or freelancer, the Sydney WordPress blog theme lets you create an awesome website with drag-and-drop page builders like Elementor. Its free version has six starter websites that you can import and modify to fit your brand.

What we like: Design and customizing this WordPress theme is easy. You can use its sticky navigation menu, upload your logo, access all Google Fonts, set your header, and more.

30. Maxwell

Free WordPress theme for bloggers: Maxwell

If you need a WordPress blog theme that’s simple yet elegant, Maxwell might be a good fit. This WordPress theme has great typography and a magazine-style layout that’s easy to customize.

What we like: Maxwell has several post layouts and features like a dropdown menu, post slider, full width, content styles, social share buttons, and more. This blog theme is also responsive, and you can update it from your WordPress dashboard with just one click.

Benefits of Excellent WordPress Themes

Great WordPress themes offer a lot of benefits. Here are three of them.

You can set up your site quickly.

Free WordPress blog themes with demo sites will help you design a unique website in a few hours. Even if you don’t like the demo websites, you can use them to get design inspiration.

Themes are easy to use.

Every good WordPress blog theme is easy to use. Most of them have their own in-built website builders. But if you prefer another builder, you can use Gutenberg, Elementor, Beaver, etc.

You’ll have improved website visibility.

A good WordPress theme ensures your site is responsive, easy to navigate, and compatible with multiple browsers. These features make your site load fast. And fast loading time is a positive signal that improves your site’s SEO.

Using a Free WordPress Blog Theme

A free blogging WordPress theme will help you create a unique, functional, and eye-catching place for your content. Each theme offers features, layouts, and styling that set them apart. So consider the overall blog design you’re going for when picking your ideal theme.

Afterward, install your theme, add content, and customize your site to create a great user experience that keeps readers returning for more.

Editor’s note: We originally published this post in December 2018 and we’ve updated it for comprehensiveness.

The post 30 of the Best Free WordPress Blog Themes in 2023 appeared first on ProdSens.live.
