Personal journey in Machine Learning

Hello, Tech Minds!

I really want to share how my journey of studying and building AI has been going. Math and statistics were subjects that captivated me during my undergraduate studies, and recently I've been venturing into TensorFlow + Python to build models.

Developing has been incredible, and my background as a dev has smoothed some of the path, especially in data modeling: a deep understanding of data structures and algorithms has let me take off in development.

This week I finished my first model. It's still in training, but the results are already visible. I'm in the learning phase myself (sounds funny, right? learning in order to teach), and to tell you a bit more: the kind of ML I built uses reinforcement learning.

And what does it do?

In this post I talk about the platform I'm building; this model will refine the inputs and, in the future, predict the next closing price.

Awesome, right?

[Image: the Criptix AI]

For now I'm using reinforcement learning, but I also want to try a recurrent neural network (RNN) with an LSTM (Long Short-Term Memory) architecture.
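
To make that concrete, here is a minimal sketch (my own illustration, not the Criptix code) of what an LSTM next-close predictor could look like in TensorFlow/Keras, assuming windows of past closing prices as input:

import numpy as np
import tensorflow as tf

WINDOW = 30  # hypothetical: each sample holds the last 30 closing prices

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64),   # LSTM keeps long- and short-term state across the window
    tf.keras.layers.Dense(1),   # regression head: the predicted next close
])
model.compile(optimizer="adam", loss="mse")

# Toy arrays only to show the expected shapes; real training would use price windows.
x = np.random.rand(256, WINDOW, 1).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32)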

So, what did you think?
Want to talk about it?

I'm really excited.

What was your win this week?

Hey folks! 😀

It’s Friday again. Hope y’all have had a good week! 🙌

Looking back on this past week, what was something you were proud of accomplishing?

All wins count — big or small 🎉

Examples of ‘wins’ include:

  • Starting a new project
  • Fixing a tricky bug
  • Eating a delicious cupcake 🧁

[GIF: a doctored Harry Potter scene in which Harry flies up on a broom toward a giant superimposed cupcake and gobbles it up]

Did you know we feature wins in our “Wins of the Week” email every Friday? Subscribe to our weekly newsletter emails to get wins delivered right to your inbox! 💌

The dark art of product pricing: Product leader nuggets

Product pricing has been front and centre in mainstream news as of late. What are the most common pitfalls with pricing, and how can product teams get it right? Andrew Skotzko, Product Leadership Coach and Fractional CPO, shares his thoughts.

BMF 📹 + Hugging Face🤗, The New Video Processing BFFs

TL;DR: if you want to test this tutorial before we start, try it out here.

Hugging Face has created a major shift in the AI community. It fuels cutting-edge open source machine learning/AI models and datasets. The Hugging Face community is thriving with great ideas and innovations to the point where the possibilities seem endless.

Hugging Face is revolutionizing Natural Language Processing (NLP) with state-of-the-art solutions for tasks like translation, summarization, sentiment analysis, and contextual understanding. Its arsenal of pre-trained models makes it a robust platform for diverse NLP tasks, streamlining the integration of machine learning functionalities. Hugging Face simplifies the training, evaluation, and deployment of models with a user-friendly interface. The more I used Hugging Face in my own personal projects, the more I felt inspired to combine it with Babit Multimedia Framework (BMF).

If you’re reading this and are not familiar with BMF, it’s a cross-platform multimedia processing framework by ByteDance Open Source. Currently, BMF is used to process over 2 billion videos a day across multiple social media apps. Can this get complex? Yes, it sure can. However, in this article, I’ll break it all down, so you know how to create unique experiences across any type of media platform.

Why BMF?

BMF stands out with its multilingual support, putting it ahead in the video processing game. BMF excels in various scenarios like video transcoding, editing, videography, and analysis. The integration of advanced technologies like Hugging Face with BMF is a game-changer for complex multimedia processing challenges.

Before we get started with the tutorial, let me share with you some ideas I envision coming to life with BMF + Hugging Face:

  • Multimedia Content Analysis: Leveraging Hugging Face’s NLP models, BMF can delve deep into textual data associated with multimedia content, like subtitles or comments, for richer insights.
  • Accessibility: NLP models can automatically generate video captions, enhancing accessibility for the hard-of-hearing or deaf community.
  • Content Categorization and Recommendation: These models can sort multimedia content based on textual descriptions, paving the way for sophisticated recommendation systems.
  • Enhanced User Interaction: Sentiment analysis on user comments can offer valuable insights into user engagement and feedback for content improvement (see the quick sketch after this list).
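
As a taste of that last idea, a sentiment pass over comments can be as small as the sketch below, using the standard transformers pipeline API (the comments here are made up):

from transformers import pipeline

# Minimal sketch: score viewer comments with the default sentiment-analysis pipeline.
sentiment = pipeline("sentiment-analysis")

comments = [
    "Loved this stream!",
    "The overlay covered the scoreboard the whole time.",
]
for comment in comments:
    print(comment, "->", sentiment(comment)[0])  # e.g. {'label': 'POSITIVE', 'score': 0.99}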

What now?

Open Source AI is creating the building blocks of the future. Generative AI impacts all industries, and this leads me to think about how generative AI can impact the future of broadcasting and video processing. I experimented with BMF and Hugging Face to create the building blocks for a broadcasting service that uses AI to create unique experiences for viewers. So, enough about the background, let’s get it going!

What we’ll build

Follow along, as we’ll build a video processing pipeline with BMF that uses the runwayml/stable-diffusion-v1-5 model to generate an image to display as an overlayed image ontop of an encoded video. If that didn’t make sense, don’t worry, here’s a picture for reference:

So why is this significant? The image of the panda is AI-generated, and combined with BMF we can send it down a processing pipeline to put it on top of our video. Think about it! There could be a scenario where you're building a video broadcasting service and, during live streams, you'd like to quickly generate images from a simple prompt and display them for your audience. There could also be a scenario where you're using BMF to edit your videos and you'd like to add some AI-generated art. This tutorial is just one example. BMF combined with models created by the Hugging Face community opens up a whole new world of possibilities.

Let’s Get Started

Prerequisites:

  • A GPU(I’m using google Colab A100 GPU. You can also use v100 or TP4 GPUs but they will just run a bit slower)
  • Install BMFGPU
  • Python 3.9-3.10 (strictly required to work with bmf)
  • FFMPEG

You can find all the BMF installation docs here. The docs will highlight more system requirements if you decide to run things locally.

Getting Started

Begin by ensuring that essential toolkits like Hugging Face Transformers and BMF are installed in your Python environment; the pip commands for each are listed in the setup steps below.

Initial Setup

  1. First, we’ll clone the following repository to get our video that we want to process(If you are coding along and want to use your own video, create your own repo and simply add a video file, preferably a short video and add to easily clone just like I did. You can also just save the video to the directory you’re coding in.)
git clone https://github.com/Joshalphonse/Bmf-Huggingface.git
  2. Install BabitMF-GPU to accelerate your video processing pipeline with BMF's GPU capabilities
pip install BabitMF-GPU
  3. Install the following dependencies
pip install requests diffusers transformers torch accelerate scipy safetensors moviepy Pillow tqdm numpy modelscope==1.4.2 open_clip_torch pytorch-lightning
  4. Install FFmpeg. The BMF framework utilizes the FFmpeg video decoders and encoders as its built-in modules for video decoding and encoding, so it's necessary to install supported FFmpeg libraries before using BMF.
sudo apt install ffmpeg
dpkg -l | grep -i ffmpeg
ffmpeg -version

The package below is installed to show the BMF C++ logs in the Colab console; otherwise only Python logs are printed. This step isn't necessary if you're not in a Colab or IPython notebook environment.

pip install wurlitzer
%load_ext wurlitzer
  5. Add the cloned GitHub repository's folder to Python's module search path. We'll need this path later on.
import sys
sys.path.insert(0, '/content/Bmf-Huggingface')
print(sys.path)

Creating the Module

Now it’s time for the fun part. We’ll create a module to process the video.Here’s the module I created and I’ll break it down for you below.

import bmf
from bmf import bmf_sync, Packet
from bmf import SubGraph
from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of a panda eating waffles"
image = pipe(prompt).images[0]

image.save("panda_photo.png")

class video_overlay(SubGraph):

    def create_graph(self, option=None):
        # create source stream
        self.inputs.append('source')
        source_stream = self.graph.input_stream('source')
        # create overlay stream
        overlay_streams = []
        for (i, _) in enumerate(option['overlays']):
            self.inputs.append('overlay_' + str(i))
            overlay_streams.append(self.graph.input_stream('overlay_' + str(i)))

        # pre-processing for source layer
        info = option['source']
        output_stream = (
            source_stream.scale(info['width'], info['height'])
                .trim(start=info['start'], duration=info['duration'])
                .setpts('PTS-STARTPTS')
        )

        # overlay processing
        for (i, overlay_stream) in enumerate(overlay_streams):
            overlay_info = option['overlays'][i]

            # overlay layer pre-processing
            p_overlay_stream = (
                overlay_stream.scale(overlay_info['width'], overlay_info['height'])
                    .loop(loop=overlay_info['loop'], size=10000)
                    .setpts('PTS+%f/TB' % (overlay_info['start']))
            )

            # calculate overlay parameter
            x = 'if(between(t,%f,%f),%s,NAN)' % (overlay_info['start'],
                                                 overlay_info['start'] + overlay_info['duration'],
                                                 str(overlay_info['pox_x']))
            y = 'if(between(t,%f,%f),%s,NAN)' % (overlay_info['start'],
                                                 overlay_info['start'] + overlay_info['duration'],
                                                 str(overlay_info['pox_y']))
            if overlay_info['loop'] == -1:
                repeat_last = 0
                shortest = 1
            else:
                repeat_last = overlay_info['repeat_last']
                shortest = 1

            # do overlay
            output_stream = (
                output_stream.overlay(p_overlay_stream, x=x, y=y,
                                      repeatlast=repeat_last)
            )

        # finish creating graph
        self.output_streams = self.finish_create_graph([output_stream])

Code Breakdown:

Importing Required Modules:

import bmf
from bmf import bmf_sync, Packet
from bmf import SubGraph
from diffusers import StableDiffusionPipeline
import torch
  • bmf and its components are imported to harness the functionalities of the Babit Multimedia Framework for video processing tasks.
  • SubGraph is a class in BMF, used to create a customizable processing node.
  • StableDiffusionPipeline is imported from the diffusers library that allows the generation of images using text prompts.
  • torch is the PyTorch library used for machine learning applications, which Stable Diffusion relies on.

Configuring the Stable Diffusion Model:

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
  • The Stable Diffusion model is loaded with the specified model_id.
  • The torch_dtype parameter ensures the model uses lower precision to reduce memory usage.
  • .to("cuda") moves the model to GPU for faster computation if CUDA is available.

Generating an Image Using Stable Diffusion:

prompt = "a photo of a panda eating waffles"
image = pipe(prompt).images[0]
image.save("panda_photo.png")
  • We then set a text prompt to generate an image of “a photo of a panda eating waffles”.
  • The image is created and saved to “panda_photo.png”.

Defining a Custom BMF SubGraph for Video Overlay:

class video_overlay(SubGraph):
  • video_overlay class is derived from SubGraph. This class will define a custom graph for video overlay operations.

Creating the Graph:

def create_graph(self, option=None):
  • create_graph method is where the actual graph (workflow) of the video and overlays are constructed.

Processing Source and Overlay Streams:

self.inputs.append('source')
source_stream = self.graph.input_stream('source')
overlay_streams = []
  • Registers input streams for the source and prepares a list of overlay input streams.

Scaling and Trimming Source Video:

info = option['source']
output_stream = (
    source_stream.scale(info['width'], info['height']).trim(start=info['start'], duration=info['duration']).setpts('PTS-STARTPTS'))
  • The source video is scaled and trimmed according to the specified option. Adjustments are made for the timeline placement.

Scaling and Looping Overlay Streams:

p_overlay_stream = (
    overlay_stream.scale(overlay_info['width'], overlay_info['height']).loop(loop=overlay_info['loop'], size=10000).setpts('PTS+%f/TB' % (overlay_info['start'])))
  • Each overlay is scaled and looped as needed, providing a dynamic and flexible overlay process.

Overlaying on the Source Stream:

output_stream = (
    output_stream.overlay(p_overlay_stream, x=x, y=y,
                          repeatlast=repeat_last))
  • Overlays are added to the source stream at the calculated position and with the proper configuration. This allows multiple overlays to exist within the same timeframe without conflicts.

Finalizing the Graph:

self.output_streams = self.finish_create_graph([output_stream])
  • Final output streams are set, which concludes the creation of the graph. Now, after this, it’s time for us to encode the video and display it how we want to.

Applying Hugging Face Model

Let’s add our image as an overlay to the video file. Let’s break down each section of the code to explain how it

input_video_path = "/content/Bmf-Huggingface/black_and_white.mp4"
logo_path = "/content/panda_photo.png"
output_path = "./complex_edit.mp4"
dump_graph = 0

duration = 10

overlay_option = {
    "dump_graph": dump_graph,
    "source": {
        "start": 0,
        "duration": duration,
        "width": 1280,
        "height": 720
    },
    "overlays": [
        {
            "start": 0,
            "duration": duration,
            "width": 300,
            "height": 200,
            "pox_x": 0,
            "pox_y": 0,
            "loop": 0,
            "repeat_last": 1
        }
    ]
}

my_graph = bmf.graph({
    "dump_graph": dump_graph
})

logo_1 = my_graph.decode({'input_path': logo_path})['video']

video1 = my_graph.decode({'input_path': input_video_path})

overlay_streams = list()
overlay_streams.append(bmf.module([video1['video'], logo_1], 'video_overlay', overlay_option, entry='__main__.video_overlay')[0])

bmf.encode(
    overlay_streams[0],
    video1['audio'],
    {"output_path": output_path}
    ).run()

Let’s break this down too

Defining Paths and Options:

input_video_path = "/content/Bmf-Huggingface/black_and_white.mp4"
logo_path = "/content/panda_photo.png"
output_path = "./complex_edit.mp4"
dump_graph = 0
duration = 10
  • input_video_path: Specifies the file path to the input video.
  • logo_path: File path to the image (logo) you want to overlay on the video.
  • output_path: The file path where the edited video will be saved.
  • dump_graph: A debugging tool in BMF that can be set to 1 to visualize the graph but is set to 0 here, meaning no graph will be dumped.
  • duration: The duration in seconds for the overlay to be visible in the video.

Overlay Configuration:

overlay_option = {
    "dump_graph": dump_graph,
    "source": {
        "start": 0,
        "duration": duration,
        "width": 1280,
        "height": 720
    },
    "overlays": [
        {
            "start": 0,
            "duration": duration,
            "width": 300,
            "height": 200,
            "pox_x": 0,
            "pox_y": 0,
            "loop": 0,
            "repeat_last": 1
        }
    ]
}
  • overlay_option: A dictionary that defines the settings for the source video and the overlay.
  • For the source, the width and height you want to scale the video to, and when the overlay should start and end are specified.
  • For the overlays, detailed options such as position, size, and behavior (like loop and repeat_last) are defined.

Creating a BMF Graph:

my_graph = bmf.graph({
    "dump_graph": dump_graph
})
  • my_graph is an instance of BMF graph which sets up the processing graph (pipeline), with dump_graph passed as an option.

Decoding the Logo and Video Streams:

logo_1 = my_graph.decode({'input_path': logo_path})['video']
video1 = my_graph.decode({'input_path': input_video_path})
  • The video and logo are loaded and decoded to be processed. This decoding extracts the video streams to be used in subsequent steps.

Creating Overlay Streams:

overlay_streams = list()
overlay_streams.append(bmf.module([video1['video'], logo_1], 'video_overlay', overlay_option, entry='__main__.video_overlay')[0])
  • An empty list overlay_streams is created to hold the video layers.
  • The bmf.module function is used to create an overlay module, where the source video and logo are processed using the video_overlay class defined previously with the corresponding options.

Encoding the Final Output:

bmf.encode(
    overlay_streams[0],
    video1['audio'],
    {"output_path": output_path}
).run()
  • The final video stream, with the overlay applied, and the original audio from the input video are encoded together into a new output file specified by output_path.
  • The .run() method is called to execute the encoding process.

Our final output should look something like this:

Thats it! We’ve explored a practical example of utilizing Babit Multimedia Framework (BMF) a video editing task using AI to create an image we can overlay on a video. Now you know how to set up a BMF graph, decode the input streams, create overlay modules, and finally encode the edited video with the overlay in place. In the future, I will consider adding more AI models, like one to improve the resolution, or even a model that creates a video from text. Through the power of BMF and Hugging Face open source models, users can create complex video editing workflows with overlays that can dynamically change over time, offering vast creative possibilities.

Try it out on CoLab and tell us what you think:

https://colab.research.google.com/drive/1eQxiZc2vZeyOggMoFle_b0xnblupbiXd?usp=sharing

Join us on our ByteDance Open Source Discord Server!

RocketSeat Discover – Module 2

Hey Dev, how's it going?

For those who saw the previous post: I'm writing these posts to help people taking RocketSeat's Discover course (https://www.rocketseat.com.br/discover) learn, and also as a way for me to review and retain the content. Keep in mind that these are my personal notes and differ from the lessons, so they can serve as a supplementary learning aid, but they do not replace the classes.

Without further ado, shall we?

Module 2 – Learning to learn

LESSON 1 – INTRO – 0:28

Study techniques for accelerated learning in programming – nicknamed REPROGRAMAR.

LESSON 2 – PLANNING AND ORGANIZATION – 3:52

It's extremely important to plan and to visualize your goal, which here is finishing the Discover project, without rushing through it.
Adjust your expectations: a project that runs 1 hour on video can take far more than double that in practice.
Once the goal is defined, organize yourself with a realistic schedule, with set days and times.

LESSON 3 – LEARNING AND THE PARE METHODOLOGY – 6:55

The learning process, as performed by the brain:
• Knowledge: the first contact with the content.
• Recognition: seeing the content again, several times.
• Solidification: a slow process.

Passive and active forms of learning – you can watch, read, and listen, but those are not active methods.

The PARE methodology, created in 2022 by the educator Mayk Brito:

P = Perguntar: ask, question, research.
A = Anotar: note, write, draw, create a mind map.
R = Revisar: review, reinforce; this tends to be a solitary process.
E = Explicar: explain, debate, defend a point, argue.

And after studying, you need to PARAR (stop): take breaks to rest, meditate, and breathe.

LESSON 4 – LEARNING IN PRACTICE – 01:37

In practice, your notes already give you almost a script, so you can create videos explaining the content, record audio of your notes and questions, write articles, join debates in communities, groups, and forums, research information, and solve challenges.

LESSON 5 – HOW TO STUDY EFFICIENTLY – 09:25

Studying well requires a set of variables, among them:

• Focus – Concentration is important, so pick calm moments of your day when you aren't easily distracted.
– Focus on what matters: if you're studying one subject, stay on it instead of hopping from branch to branch.
– And don't try to multitask your studies; you may not manage to give full attention to the activity at hand.

• Motivation – Dopamine needs to be regulated to generate the desire to understand a given subject.
– Discipline is essential: you won't always be at 100% for your activities, and that's where discipline comes in. If you've already organized your study schedule, try to keep the rhythm you created.

• Schedule – Find the time of day when you're most alert and least tired; because of brain chemistry, truly nocturnal people are rare.

• Rest – Rest restores and consolidates the new information received during the day, and intentional rest is a pause taken during the day.

• Health – Physical health means taking care of your diet, getting some sun rather than staying in your room, hydrating, getting fresh air outside, and exercising regularly.
– Caring for your mental health is crucial: more and more people are diagnosed with anxiety and feel apprehensive about their future. Anxiety and fear can be incapacitating, but you can't let them paralyze you.
– It also means tending to the spiritual side: meditation is one way to reconnect with your body and deal with what affects your mental health; another is breathing (pause, inhale deeply for about 4 seconds, then release, and you'll slow down); and prayer is a more religious matter for those who practice it.

LESSON 6 – STUDY TECHNIQUES IN PROGRAMMING – 09:30

School teaches you to memorize content; programming teaches you to research and to process the information you find.
• Pomodoro – 25 uninterrupted minutes of study, with your phone off, in a calm place where you can concentrate easily, followed by a 5-minute break doing anything unrelated to programming; repeat this circuit four times to close one Pomodoro set. If you want to start another, it's recommended to rest for 15 minutes in between. At the start you probably won't manage the full 25-minute circuits and will only do 10-15 minutes, and that's OK; it takes time to get used to it. Once you can, you may increase the time, with a recommended maximum of 50 minutes, since more than that can negatively impact your spine.
• Lozanov – low background music while you study, instrumental such as lo-fi or classical (60-70 bpm). It's good for reviewing.
You can combine Pomodoro with Lozanov.
• Feynman – Study a subject and explain it to someone who doesn't know it, identify the gaps in your explanation, go back to studying the subject, and improve.
• Active Recall – Watch the lesson, make short notes (bullet points, mind maps, screenshots), and pull from memory to code, or use the notes.

LESSON 7 – REVIEW AND WRAP-UP – 01:46

This module covered topics such as planning, organization, active ways of studying, making your moment ideal for studying, and techniques to help.

LESSON 8 – QUIZ

LESSON 9 – GET READY FOR WHAT'S COMING – 01:09

In the upcoming lessons you'll learn the details of how to upload information to GitHub, including the Read Me, and you'll understand what HTML, CSS, and JavaScript are.

Moonly weekly progress update #52 – New Moonly Collection

We are working very hard and, step by step, we are bringing all the new features to completion! We are very excited about all this, but we still have so many cool things planned!

We have been preparing something new and very interesting for a long time, and it was a secret until now: a new Moonly collection called Karamendos is on the horizon.
The collection will be something very unique and special! We will announce everything very soon and start onboarding fresh blood to this NFT project, which will be the bridge between the Web2 and Web3 worlds!

The automatio.co application will play a very important and big role in this project. Stay tuned; more about all this is coming soon…

Weekly devs progress:

  • Fixed the black screen issue with the render

  • Merged some stuff together and deployed it

  • Fixed rare issues on the mint scraper

  • Debugged a profile with Moonly holder issue — fixed some irregularities

  • Reclaimed a lot of disk space on both bare-metal servers

  • Finished merging the sniper bot — need to fix the additional issues

  • Finished frontend of Karamendos

  • Finished merging the cross-site auth with staging and fixed most of the conflicts

  • Added checking if videos had already been generated and not recreating them

  • Debugged workflow on staging and made some fixes

  • Finished fixing scrapers-mints package

Holder Verification Bot (HVB):

  • Developed the auto-creation rules feature

  • Found some performance issues for a large number of rules

  • Optimized and reduced API requests for the initial render on the HVB page

  • Deployed the multi-wallet checker & guilds cache feature with some UI improvements

  • Deployed the latest changes at the test bot server

  • Improved HVB spinner — added on each search box suggestion

  • Fixed the completed job removal issue

  • Resolved search box suggestion scrolling issue

  • Fixed guilds role fetch issue on the server change

  • Resolved the issue of the managed roles created by the Bot

  • Added Validation condition on fetch collections to generate rules

  • Integrated an error exception on exceeding the server role limitation

  • Included a queue inside checking the holder verification process

  • Changed the UX of HVB by adding pagination & moved some components

Raffle Feature

  • Created a separate app/program for Raffle Event

  • Removed raffle code from project-staking

Staking Feature

  • Created a pNFT collection for testing

  • Setup the staking environment for testing

  • Researched and tested pNFT

Upcoming NFT collections:

https://moon.ly/nft/karamendos

https://moon.ly/nft/fishballerz

https://moon.ly/nft/nekkro

https://moon.ly/nft/brainslum

Minted projects worth mentioning:

https://moon.ly/nft/assetdash

https://moon.ly/nft/mad-lads

https://moon.ly/nft/transdimensional-fox-federation

https://moon.ly/nft/famous-fox-federation

Easiest way to create integration tests for GraphQL server with 80% code coverage in 30 minutes

Intro

Integration testing is an essential part of any robust software development process, yet creating effective tests can be a daunting and time-consuming task for developers.

For those using GraphQL servers, the challenge is even more pronounced, as writing comprehensive tests requires understanding complex queries, mutations, and schemas.

In this blog post, we’ll go through the easiest way to get started with automated tests for GraphQL server by using Pythagora – an open source tool that creates automated integration tests by analyzing server activity without you having to write a single line of code. We will explore how to get even 80% code coverage in just 30 minutes of playing around the server.

Pythagora tests running

Installation and setup

Getting started with Pythagora is a breeze, as it’s designed to be a user-friendly tool for developers of all experience levels. To install Pythagora, simply run the following command in your project directory:

npm install pythagora

Once installed, there are two essential commands you need to know: the first starts capturing server activity to create tests, and the second runs the created tests.

Creating tests

Pythagora works by capturing API requests, which it uses to determine the start and the end of an integration test. So, to get Pythagora to create tests, you need to start your app wrapped in Pythagora by running the following command:

npx pythagora --init-command "my start command" --mode capture

The "my start command" is the command you’re using to run your app. For example:

  • npm run start

  • nest start (if you’re using a framework like NestJS)

  • node index.js

When you execute the capture command, Pythagora will start your GraphQL server and monitor its activity. Now, you just need to make API requests to your server. To do this, you should use your application as you normally would. You can click around the frontend of your app or you can do it through the GraphQL playground.

As requests are made to your server, Pythagora captures the queries, mutations, and responses, generating the corresponding integration tests automatically. You can see the captured tests in the pythagora_tests folder in the root of your repository. Each file in this folder represents one endpoint (in the case of GraphQL, the majority of your tests will be in one file, e.g. graphql.json, which represents the /graphql endpoint). In it you will see an array where each item is one test, containing all the data that Pythagora captured and uses to run the test. A purely illustrative sketch of such an entry follows below.
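
For orientation only, a captured entry might be shaped roughly like this. The field names are hypothetical placeholders, not Pythagora's documented schema; open your own pythagora_tests/graphql.json to see the real structure:

[
  {
    "id": "<test id>",
    "endpoint": "/graphql",
    "method": "POST",
    "body": { "query": "{ users { id name } }" },
    "statusCode": 200,
    "responseData": { "data": { "users": [] } }
  }
]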

You can continue making requests as long as you want. We were able to get to 90% code coverage within an hour of making requests, but this will obviously depend on the complexity of your app.

Running tests

Once you’ve captured your integration tests using Pythagora’s capture mode, it’s time to run them. To do so, use the following command, again replacing "my start command" with the appropriate command for your server:

npx pythagora --init-command "my start command" --mode test

By running the command, Pythagora will execute the generated integration tests, providing you with a summary of the test results, including passed tests, failed tests, and code coverage.

Ok, once you have your tests captured, you can go on with your development until you encounter a test failing. Then, you will want to debug the code.

Debugging

To debug your code, the easiest way is to put breakpoints (or console.logs) around your app and rerun the failed tests. You have 2 different commands that you can use to rerun failed tests. If you want to rerun all tests, you can type:

npx pythagora --init-command "my start command" --rerun-all-failed

This command will start running all failed tests from the previous test run one by one. If you want to debug a specific test, you can type:

npx pythagora --init-command "my start command" --mode test --test <test_id>

This command will run a single test that you specify by the test id. You can find the test id in the logs when a test fails. Here is an example of a failed test log, in which the test id is the value inside the ReqId parameter.

[Image: a failed Pythagora test log]

Code coverage

Finally, to make sure you have enough of your codebase covered with tests, Pythagora integrates the nyc code coverage report. By default, you will get a summary of the code that's covered by tests each time you run the test command. It will look like this:

[Image: Pythagora code coverage summary]

If you want to see all lines in your repository that are and aren’t covered by tests, you can add an argument --full-code-coverage-report to the test command. For example:

npx pythagora --init-command "my start command" --mode test --full-code-coverage-report

This will generate a full nyc code coverage report which you can see by opening pythagora_tests/lcov-report/index.html. This should open a static website in your browser and show you your entire repository along with files and lines that are covered with tests.

[Image: Pythagora code coverage report]

From here, you can run the Pythagora capture command again and create API requests that will trigger the uncovered lines until you get to 100% code coverage.

Conclusion

So, to summarize, in this post we went through the process of creating automated integration tests for a GraphQL server with Pythagora. This way, you can easily cover your entire backend with tests just by making API requests to your server while Pythagora is recording server activity.

Pythagora is open source, so if you liked this post or Pythagora, it would mean the world to us if you star the Pythagora GitHub repo here. To stay up-to-date on the latest features, enhancements, and news, you can add your email here. Your support will help us continue to develop and improve this powerful tool for the benefit of developers everywhere.

Thank you for reading and happy testing!

Do Blog Posts Actually Lead to Purchases? [New Data]

Our 2023 Marketing Trends Report shows that 29% of marketers use a website or blog to attract and convert leads.

The enduring importance placed on blogging isn’t shocking. Blogs are integral to most digital marketing plans because they can:

  • Boost SEO
  • Improve overall site traffic and a brand’s online presence
  • Help prospects learn more about your industry, brand, product, or service

However, starting and running an effective, traffic-generating blog requires much time and energy. Plus, if you’re a marketing manager on a tight budget, you may wonder, “Will blog posts actually lead to purchases?”

To answer the question, here’s some research we conducted to help you determine if a company blog suits your marketing strategy.

Important Blog Statistics Marketers Should Know

Do Blog Posts Lead to Purchases?

Why Blogs Lead to Purchases

How to Lead Readers to Purchases

Creating Your Blog Nurturing Process

Important Blog Statistics Marketers Should Know

As previously mentioned, blogs remain essential to many marketing strategies, and there are statistics that show why. Here are some blog statistics we gathered from our Marketing Trends Report:

  • One in three marketers are leveraging their blog or website, as well as SEO, to land on SERPs.
  • Blogs, social media shopping tools, and influencer marketing are all tied for the highest ROI of any marketing channel.

[Chart: blogging, social media shopping tools, and influencer marketing all tied for highest ROI]

  • 33% of marketers are leveraging blog posts in their marketing strategy.
  • Blog posts, interviews, images, and podcasts will see high first-time use among marketers in 2023.

Do Blog Posts Lead to Purchases?

The growth of other content strategies, like video marketing, might make you think consumers will only buy products after seeing them on other platforms.

However, when we surveyed 300 consumers via Lucid and asked, “Have you ever purchased something from a company after reading a blog post?” a whopping 56% said, “Yes.”

[Chart: 56% of surveyed consumers have purchased something after reading a blog post]

Furthermore, according to our trends report, blog posts are among the media formats with the highest ROI, along with videos, images, and podcasts.

Why Blogs Lead to Purchases

If your company has a blog that discusses your industry or how your offerings can help with the average reader’s everyday pain points, your audiences can discover and gain trust in your brand’s expertise. That trust and credibility could ultimately lead to purchases.

Why? Suppose a prospect trusts the advice or information given in your blog posts. In that case, they might trust that your offerings are better quality than your competitor’s because your brand knows the industry, what customers want, and the pain points your product or service solves.

Even if you prefer video, social media, or visual marketing strategies, it’s important to remember that blogs can help you sell products in ways other content types can’t.

Videos and images, for example, might only give prospects a glimpse of how a product or service works. However, blog posts can offer extensive information that would otherwise be cut from videos and images to avoid overwhelming viewers on social media.

Blog posts can also increase your search ranking and allow more opportunities to link directly to a landing or purchasing page. Consumers can find your content via search, then read your post, and easily click to a product purchasing page after your content persuades them to buy a product.

Additionally, because most blog sites allow you to embed videos, podcasts, and imagery, your company blog can also be a great place to promote your other marketing assets while still informing prospects about your brand.

How to Lead Blog Readers to Purchases

In the following year, consider a content marketing mix that includes blogging. How do you persuade shoppers with your blog posts? Here are a few quick tips.

Mentioning your brand, product, or service where it feels natural in your blog posts is an excellent way to generate leads and purchases.

However, you can also use hyperlinks to link to product pages or CTA buttons that draw slightly more attention to a product or offer without directing reader attention far away from the blog post.

At HubSpot, we usually place at least three CTAs related to a blog topic in each post:

  • A text-based CTA in the introduction
  • A larger banner image at the bottom
  • A slide-in CTA that shows up to the side of your text as you’re scrolling through the middle of the post.

This allows three mentions of a product or offering without interrupting the reader’s experience.

The image below is an example of our CTAs.

From top to bottom: a text-based CTA, a slide-in CTA, and a bottom CTA. (Image source)

Offer a free resource:

You probably noticed the above CTA examples included links to free resources. Though it may sound counterproductive, free resources can help your blog lead consumers to make purchases. Here’s how:

If your brand sells a subscription, service, or product that’s pricier or needs your company’s leadership approval, blog readers might need more than a few blog posts to trust your brand and invest in your product. In that case, you should focus on lead nurturing rather than sending blog readers directly to a purchasing page.

HubSpot and many other blogs have grown their contact and qualified lead lists simply by creating a free downloadable resource, such as an ebook, template, or research report, and offering it through CTAs at the end of blog posts.

Below is an example of a recent free research report resource we offered at the end of one of our Sales Blog posts:

[Image: example of a blog post with a bottom CTA]

To access free — but gated — offers, readers must give basic information about themselves and their company. From there, they receive an automated email or instant download of the resource while also becoming a contact — or lead — and enter our lead-qualification process to see if they could be an excellent prospect to reach out to.

The HubSpot Blog’s free resource strategy results in thousands of qualified leads per year that could convert into HubSpot customers. You can learn more about how to implement it on your blog here.

Remember, quality beats over-promotion.

In another recent Lucid survey, one-third of our general consumer respondents most commonly read blog posts to “learn something new.” Meanwhile, roughly 20% read blog posts for the sake of entertainment.

It’s important to remember readers will likely find your blog because they’re looking for information related to an industry they work in. They may also want to learn something related to their hobbies or need solutions for a pain point in their daily or professional lives.

Odds are, they’re not looking solely for promotional content. If a reader visits your blog site and finds nothing but blog posts filled with product shots and cheesy advertorial language, they’ll likely lose interest in your content and may not develop the sense of trust needed to make a purchase.

It’s wise to place a few un-intrusive CTAs in your blog posts and to mention your product or service when it feels relevant. Ensure that your content primarily offers valuable guidance, advice, and information to help your reader fulfill their needs.

For example, in this post about AI social media tools, we give valuable information about implementing AI-based technology in a social media process while listing HubSpot as one of the tools readers can use.

While we still mention our offerings, the post aims to show readers how multiple AI tools can streamline a marketing process.

Creating Your Blog Nurturing Process

Every brand has a target audience with different interests and content needs. While one blog strategy, such as free resources, will work well to generate qualified leads for a B2B company, other tactics, like simply linking to a product in blog posts, might be more lucrative for consumer-facing brands.

As you focus more on turning your blog traffic into revenue, keep these questions in mind:

  • What information is valuable to my audience?
  • Does my product require lead nurturing, such as gated offers?
  • Will the tactic I’m using seem over-promotional or disengaging to my audience?

Want to learn more about how the HubSpot blog generates leads? Check out this post from one of our content acquisition managers. Or, take down these tips on how to make money blogging.

The Downsides of New Technology Come Fast and Furious (Let's talk about AI)

This post is about the dangers of AI, and while there is plenty of “fear mongering” and celebration surrounding this subject, it’s important to gain more perspective from those integrating or planning to integrate these technologies. A lot of this type of dialog comes from journalists. As builders, we have a relevant perspective that needs to be checked in on.

We are at an inflection point in computing. Generative AI is not "all-powerful", but it is a powerful force in the future of information and truth. These tools are both used and abused in modern society; this applies to OpenAI, Microsoft AI, Meta's LLaMA, and everything new in this space. With the significant price reduction of OpenAI's primary API, tomorrow's utilities will be more powerful than today's.

While you shouldn’t feel ashamed of your excitement for this technology, it’s essential to note that the downsides don’t come tomorrow; they come today, often before the upsides arrive. Tools, infrastructure, and government policy are necessary to ensure the safe, fair, and equitable use of our most powerful technologies, but the best parts often don’t surface in time, or ever.

I don’t want to naively equate these technologies, but I think it’s worth mentioning for illustration: We are still waiting on the promise of crypto, and we have seen a remarkable amount of downside. Pump and dump schemes, crime circles, environmental devastation — downsides so far have been so much more tangible than the upsides.

I am excited about AI, but as fast as the upsides come, we risk the downsides hitting us sooner. Pick your poison: Disinformation, workforce displacement, fraud, etc. Let’s talk about it in terms of chaos.

Chaos

Creating chaos can be easier than preventing it because it typically requires fewer resources and less effort in the short term. Chaos can arise from misinformation, lack of communication, or insufficient planning, and these factors can be easier to cultivate than to address. Preventing chaos requires more resources, planning, and coordination. It involves identifying potential problems and taking proactive steps to mitigate them before they spiral out of control. This can be challenging because it often requires a deep understanding of the underlying issues and the ability to take decisive action to address them.

Moreover, chaos can have a snowball effect, where small issues escalate quickly into larger ones, making it increasingly difficult to control the situation. In contrast, preventing chaos requires a sustained effort over time, which can be challenging to maintain.

Overall, preventing chaos requires more proactive effort and resources in the short term, but it can help avoid much greater costs and negative consequences in the long term.

So what to do about all of this?

This post isn’t just a nebulous warning. Let’s talk about a couple of themes to move forward with.

Scrutiny

Let’s be excited about the innovations, but let’s also scrutinize these releases consistently. We will see uneven care and thoughtfulness in the rolling out of these technologies. We need to laud the good stuff and scrutinize the bad. For a tangible decision you can make to promote scrutiny, seek out the open source tool. This is often the best thing for your development in the long run, but all else being equal, it is good for security and safety.

Safety and security

The world wide web was not initially built to be secure, and the web security industry was virtually non-existent early on. Only after decades of scrutiny and focus did modern security normalize. Adoption of the web, and society’s willingness to place trust in it, was a slow enough process for security to catch up, perhaps still not enough. We have an OWASP top ten, and we have dedicated conferences.

Safety and security in the era of AI will take a different form, but it needs an industrial complex akin to other cybersecurity fields. Investment in mitigation in every type of corporation and government needs to be normalized and prioritized.

Safety and security outcomes in AI are going to be inherently less deterministic than what we are used to, and allaying chaos is an uphill climb, but it is a climb that has to start today.

Thanks for reading, happy coding!

DNS Server Configuration

Introduction

A DNS service is a system that manages the mapping between domain names and IP addresses, making it possible for computers to locate and connect to websites and other online resources.

DNS ports 👉 UDP/53, TCP/53

Simple Lookup 👉 Domain name -> IP address
Reverse Lookup 👉 IP Address -> Domain name
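
For example, with the dig utility (part of the bind-utils package), the two lookup types look like this, using a placeholder domain and address:

dig example.com A
dig -x 192.168.1.10

The first returns the A record for the name; the second queries the PTR record for the address.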

Recursive DNS servers 👉 Look up information in the DNS hierarchy for clients

Authoritative DNS servers 👉 Store and provide information about specific domains

Domain Name Structure

✨ Hostname: A name given to a specific device or server on a network, used to identify and distinguish between different devices

✨ Domain name: A human-readable label used to identify a website or other online resource, consisting of one or more words separated by periods, with the rightmost label indicating the TLD

✨ Root domain: The highest level of the domain name system, represented by a period at the end of a domain name, and serving as the starting point for all domain name resolutions

(Diagram: the DNS namespace tree, from the root down through TLDs to individual hostnames)

DNS record types

✨ A Record: An A record maps a domain name to an IPv4 address

✨ AAAA Record: An AAAA record maps a domain name to an IPv6 address

✨ MX Record: An MX record specifies the mail server responsible for accepting email messages for a specific domain name

✨ CNAME Record: A CNAME record maps an alias or nickname for a domain name to the actual domain name

✨ NS Record: An NS record specifies the authoritative name servers for a domain

✨ TXT Record: A TXT record can contain any text data, and is often used for DNS-based authentication and anti-spam measures

✨ SRV Record: An SRV record specifies the location of a service provided by a domain, such as a SIP or XMPP service

✨ SOA Record: An SOA record specifies the authoritative name server for a domain, and contains information about how frequently the DNS information should be refreshed
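To inspect any of these record types for a live domain, dig can query each one by name (using example.com as a stand-in):

dig example.com A      # IPv4 address
dig example.com AAAA   # IPv6 address
dig example.com MX     # mail servers
dig example.com NS     # authoritative name servers
dig example.com TXT    # text records
dig example.com SOA    # zone authority information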

💡 A Fully Qualified Domain Name (FQDN) is a complete domain name that specifies the exact location of a resource within the Domain Name System (DNS) hierarchy

👉 For example, the FQDN for a website might be “www.example.com”, where “www” is the hostname, “example” is the second-level domain, and “com” is the top-level domain. The complete FQDN specifies the full path to the web server, and can be used by client devices to locate and connect to the server

DNS Workflow

Simple hands-on

💡 I am using 2 Linux VMs in my VMware Workstation to configure 2 DNS servers, one of them as a backup; the network is 192.168.1.0/24

We will start by installing the bind package on both systems

yum -y install bind-*

After installation, we will go to

vi /etc/named.conf

👉 This file is the main config file for the DNS server

Inside this file, I cleared out the comments in this section for clarity

options {
        listen-on port 53 { 127.0.0.1; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { localhost; };
        recursion yes;
        dnssec-enable yes;
        dnssec-validation yes;
        bindkeys-file "/etc/named.root.key";
        managed-keys-directory "/var/named/dynamic";
        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};

Inside this file, we will change 127.0.0.1 to

listen-on port 53 { any; };

👉 We want every host on the network to be able to reach the DNS server

We also need to change this line to accept queries from any host

allow-query     { any; };

Finally, setting recursion to no

recursion no;

👉 Setting recursion to no makes the server refuse recursive queries instead of going out to resolve names it is not authoritative for. We turn this feature off for security reasons, since an open recursive resolver can be abused, for example in DNS amplification attacks.

💡 In simpler words, we are making the DNS server refuse any query it doesn’t already know the answer to, instead of searching for it
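To see the effect, a query for a name this server is not authoritative for should now come back REFUSED instead of being resolved (assuming the server is reachable at 192.168.1.128, as in this lab):

dig @192.168.1.128 google.com A

# The answer section stays empty, and the header reads something like:
# ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: ...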

This triggers another issue: with recursion fully disabled, a local DNS server in our network will not resolve names from any other DNS servers, so clients using the local DNS server cannot reach any website outside of the local DNS address pool.

To solve the above issue, we use an ACL rule that lists the clients allowed to recurse

acl AllowRecursion {
        127.0.0.1;
        192.168.1.0/24;    // our lab network (from the setup above); adjust to your own
};

Also adding, inside the options block,

recursion yes;
allow-recursion { AllowRecursion; };
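After editing /etc/named.conf, it is worth validating the syntax before restarting the service; bind ships a checker for exactly this:

named-checkconf /etc/named.conf

👉 No output means the file parsed cleanly; syntax errors are printed with line numbers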

Looking at the bottom of the config file

include "https://dev.to/etc/named.rfc1912.zones";

👉 This is where we include the domain names that we own

Also,

zone "." IN {
        type hint;
        file "named.ca";
};

👉 This declares that if a requested name is not found in any local zone, DNS should fall back to the named.ca file, which contains the addresses of the root name servers

Now we need to include the domain name in the zones file

vi /etc/named.rfc1912.zones

# Adding the following at the end
zone "waji.com" IN {
        type master;
        file "waji.zone";
        allow-update { 192.168.1.129; };
        allow-transfer { 192.168.1.129; };
};

zone "1.168.192.in-addr.arpa" IN {
        type master;
        file "waji.rev";
        allow-update { 192.168.1.129; };
        allow-transfer { 192.168.1.129; };
};

👉 The 192.168.1.129 is the IP address for the backup DNS server

We have just configured the DNS zones for forward and reverse lookup

💡 We have master, slave, and hint types for DNS zone entries: master for the main DNS server, slave for the backup DNS entry, and hint for the root hints used by caching servers

💡 waji.zone and waji.rev are the zone files. These files hold the actual records (A records, etc.)

If we are not using DDNS, we won’t need the allow-update line

Since I am not using DDNS, the above becomes,

zone "waji.com" IN {
        type master;
        file "waji.zone";
        allow-transfer { 192.168.1.129; };
};

zone "1.168.192.in-addr.arpa" IN {
        type master;
        file "waji.rev";
        allow-transfer { 192.168.1.129; };
};

💡 Never use any in allow-transfer. A zone transfer hands over the entire zone file, record names and all; if an anonymous party can pull it, they get a complete map of our infrastructure to exploit

👉 We only have to allow the Slave DNS server. In my case 192.168.1.129 is our slave DNS server

Now, we need to go to

cd /var/named

Here, we need to create the files

cp named.localhost waji.zone
cp named.localhost waji.rev

In the waji.zone file

$TTL 1D
@       IN SOA  ns1.waji.com.         root (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN      NS      ns1.waji.com.
ns1     IN      A       192.168.1.128
www     IN      A       192.168.1.128
aaa     IN      CNAME   www.waji.com.

👉 Here we have set it up so that if a client looks up ns1.waji.com, the server will answer with 192.168.1.128

@ -> is replaced with the zone's origin (the domain name)

Now, in the waji.rev file

$TTL 1D
@       IN SOA  ns1.waji.com.    root (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN      NS      ns1.waji.com.
128     IN      PTR     ns1.waji.com.
128     IN      PTR     www.waji.com.
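With both zone files written, we can sanity-check them before starting the service; named-checkzone loads a zone file and reports any errors:

named-checkzone waji.com /var/named/waji.zone
named-checkzone 1.168.192.in-addr.arpa /var/named/waji.rev

👉 A healthy zone prints something like "loaded serial 0" followed by OK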

We need to change the ownership for the files and directories under /var/named to named for the DNS to actually work

# Current owner
-rw-r----- 1 root  root   198 Feb 23 13:28 waji.rev
-rw-r----- 1 root  root   222 Feb 23 13:09 waji.zone

# Changing the group ownership (':named' sets only the group)
chown :named ./waji.*

-rw-r----- 1 root  named  198 Feb 23 13:28 waji.rev
-rw-r----- 1 root  named  222 Feb 23 13:09 waji.zone

Now, we just need to enable and start the service

systemctl start named
systemctl enable named

Finally, adding the service to the firewall

firewall-cmd --permanent --add-service=dns
firewall-cmd --reload
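We can confirm the firewall rule took effect:

firewall-cmd --list-services
# The output should now include dns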

We successfully configured the master DNS server.
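Before moving on, a quick local sanity check from the master itself (dig came along with the bind-* packages we installed):

dig @127.0.0.1 www.waji.com +short
# Expected output: 192.168.1.128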

👉 From the unconfigured slave DNS server we will test if the master was configured correctly

From the slave DNS server,

nslookup
> server 192.168.1.128   
Default server: 192.168.1.128
Address: 192.168.1.128#53

> www.waji.com
Server:     192.168.1.128
Address:    192.168.1.128#53

Name:   www.waji.com
Address: 192.168.1.128

We can test the other domain name

> ns1.waji.com
Server:     192.168.1.128
Address:    192.168.1.128#53

Name:   ns1.waji.com
Address: 192.168.1.128

Or we can reverse lookup

> 192.168.1.128
128.1.168.192.in-addr.arpa  name = ns1.waji.com.
128.1.168.192.in-addr.arpa  name = www.waji.com.
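dig can do the same reverse lookup with its -x flag, which builds the in-addr.arpa name for us:

dig @192.168.1.128 -x 192.168.1.128 +short
# Expected output: ns1.waji.com. and www.waji.com.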

Now to configure the slave DNS server,

vi /etc/named.conf

listen-on port 53 { any; };
allow-query     { any; };

Next, configuring the zones inside the /etc/named.rfc1912.zones file

zone "waji.com" IN {
        type slave;
        file "slaves/waji.zone.slave";
        notify yes;
        masterfile-format text;
        masters { 192.168.1.128; };
};

zone "1.168.192.in-addr.arpa" IN {
        type slave;
        file "slaves/waji.rev.slave";
        notify yes;
        masterfile-format text;
        masters { 192.168.1.128; };
};

👉 We don’t need the masterfile-format text setting in an actual setup; the default binary (raw) format is preferred in production. I am only using text here so we can read the transferred file contents during this hands-on.

Going back to the Master DNS server and adding a few lines

zone "waji.com" IN {
        type master;
        file "waji.zone";
        allow-transfer { 192.168.1.129; };
        also-notify { 192.168.1.129; };
};

zone "1.168.192.in-addr.arpa" IN {
        type master;
        file "waji.rev";
        allow-transfer { 192.168.1.129; };
        also-notify { 192.168.1.129; };
};

👉 also-notify makes the master send a NOTIFY message to the slave whenever the zone changes, so the slave can pull the update right away instead of waiting for its refresh timer

Lastly, still on the Master DNS server, we update the forward zone file

vi /var/named/waji.zone

$TTL 1D
@       IN SOA  ns1.waji.com.   root (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN      NS      ns1.waji.com.
        IN      NS      ns2.waji.com.
ns1     IN      A       192.168.1.128
ns2     IN      A       192.168.1.129
www     IN      A       192.168.1.128
aaa     IN      CNAME   www.waji.com.

And for the reverse lookup (in general, remember to bump the SOA serial whenever a zone file changes so slaves notice the update)

vi /var/named/waji.rev

$TTL 1D
@       IN SOA  ns1.waji.com.    root (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN      NS      ns1.waji.com.
        IN      NS      ns2.waji.com.
128     IN      PTR     ns1.waji.com.
129     IN      PTR     ns2.waji.com.
128     IN      PTR     www.waji.com.

Restarting the service so the updated zones are loaded (on a live server, rndc reload also works and avoids a full restart)

systemctl restart named

👉 Back to the slave DNS server,

systemctl start named
systemctl enable named
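If the slave runs systemd, we can watch the initial zone transfer happen in the journal:

journalctl -u named | grep -i transfer

👉 Look for lines like: transfer of 'waji.com/IN' from 192.168.1.128#53: Transfer completed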

If we navigate to /var/named/slaves

ls -l /var/named/slaves/
total 8
-rw-r--r-- 1 named named 408 Feb 23 14:55 waji.rev.slave
-rw-r--r-- 1 named named 380 Feb 23 14:55 waji.zone.slave

✔ We can confirm that the DNS information is present in the slave DNS server
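We can also exercise the transfer path explicitly from the slave with an AXFR query; this only succeeds because the master's allow-transfer includes this host:

dig @192.168.1.128 waji.com AXFR

# Prints every record in the zone, starting and ending with the SOA record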

✨ We looked at what a DNS server is and how it works by doing a short hands-on using virtual machines. We also linked master and slave DNS servers, which provides redundancy and fault tolerance

The post DNS Server Configuration appeared first on ProdSens.live.
