Is it a good idea to take a 3-month pause to crack top product-based companies?

Currently I am working at a service-based IT company where the salary is very low, around 20% of what product-based companies are paying.
It has been 2.5 years and I want to switch, so I tried preparing in parallel for a month, but it is difficult to prepare that way.

So, is it a good idea to resign and prepare for top companies for 3 months, or not?
Has anyone done the same? Please share your experience.

The difficulty of SQL stems from relational algebra

In the field of structured data computing, SQL is still the most widely used working language: not only has it been adopted by all relational databases, it is also the target language of many new big data platforms.

For any computing technology, people usually care about two kinds of efficiency: how efficiently operations can be described, and how efficiently they execute. This is easy to understand: if descriptive efficiency is too low, development costs too much and programs become hard to write; if execution efficiency is low, results take too long to obtain, which greatly reduces practical value. Simply put, we want code that is simple to write and fast to run.

However, as we have analyzed in earlier posts on structured data computing, SQL actually performs well at neither. As soon as the situation becomes slightly more complex, it is hard to handle, often producing thousands of lines of code nested N layers deep, and computations over tens of gigabytes that need hours to run. Two typical examples are calculating the number of days a stock has risen continuously, and computing the TopN of a large set both overall and within groups. The details will not be repeated here; interested readers can refer to the previous posts.

For how many consecutive days does a stock rise, at the longest? This is originally a simple task, but because SQL lacks the ability to compute over ordered data, it has to be written as multiple layers of nesting, which is hard both to write and to understand:

select max(ContinuousDays) from (
    -- count the length of each run of consecutive rising days
    select count(*) ContinuousDays from (
        -- running sum of the "not rising" flag: it stays constant within
        -- a run of rising days, so it labels each run
        select sum(UpDownTag) over (order by TradeDate) NoRisingDays from (
            -- tag each day: 0 if the price rose versus the previous day, else 1
            select TradeDate,
                   case when Price > lag(Price) over (order by TradeDate)
                        then 0 else 1 end UpDownTag
            from Stock ) )
    group by NoRisingDays )
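
For contrast, here is a minimal sketch in Python (not SQL, and not from the original article) of how simply the same idea falls out once ordered, step-by-step computation is available; it assumes a list of prices already sorted by date and counts consecutive up-moves:

# One pass over prices in date order: extend the streak while the price
# rises, reset it otherwise, and remember the longest streak seen.
def longest_rising_streak(prices):
    longest = streak = 0
    for prev, cur in zip(prices, prices[1:]):
        streak = streak + 1 if cur > prev else 0
        longest = max(longest, streak)
    return longest

print(longest_rising_streak([10, 11, 12, 11, 12, 13, 14]))  # -> 3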

To get the top 10 out of 100 million rows, and to get the top 10 of each group after grouping, the two queries are written quite differently, yet both contain ORDER BY. That means a high-complexity full sort has to be specified, and we can only hope the database optimizer finds a lower-complexity algorithm on its own.

-- top 10 overall
SELECT TOP 10 * FROM Orders ORDER BY Amount DESC

-- top 10 within each area
SELECT * FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY Area ORDER BY Amount DESC) rn
    FROM Orders )
WHERE rn <= 10
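
To see what such a lower-complexity algorithm looks like, here is an illustrative Python sketch (hypothetical data, not the article's): a bounded heap maintained during a single scan costs about O(n log 10), whereas the ORDER BY formulation nominally requests an O(n log n) full sort:

import heapq
import random

# Hypothetical data: one million orders, each with an area and an amount
orders = [{"area": random.choice("ABC"), "amount": random.random() * 1000}
          for _ in range(1_000_000)]

# Top 10 overall: a single pass keeping a heap bounded at 10 elements
top10 = heapq.nlargest(10, orders, key=lambda o: o["amount"])

# Top 10 within each group: one small min-heap per area, still one pass;
# each heap retains only the 10 largest amounts seen for its area
heaps = {}
for o in orders:
    h = heaps.setdefault(o["area"], [])
    heapq.heappush(h, o["amount"])
    if len(h) > 10:
        heapq.heappop(h)  # evict the smallest of the 11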

We have also analyzed the reasons elsewhere: SQL lacks discreteness, which leads to incomplete set orientation, difficult ordered computation, and difficulty describing high-performance algorithms.

However, there is a deeper reason behind this, as the fundamental difficulty of SQL actually stems from its theoretical foundation, namely relational algebra.

To explain this view, we need to look at what implementing a calculation with a program actually involves.

Essentially, writing a program is the process of translating a problem-solving idea into a precise formal language that a computer can execute. It is like elementary school students solving word problems: after analyzing the problem and coming up with a solution, they still need to write out the arithmetic expressions. Programming is the same. It is not enough to come up with a solution; we must also translate that solution into actions the computer can understand and execute.

For a formal language used to describe computations, the core lies in the algebraic system it is built on. We will not strictly define "algebraic system" here, only explain it in layman's terms: to solve a class of computational problems, people define some data types and a set of operations on them, ensuring the closure and consistency of these operations; that is what we call an algebraic system. For example, the familiar rational numbers with the four arithmetic operations form an algebraic system for everyday numerical calculation. Closure means a calculation's result must still be one of the defined data objects; for instance, the result of arithmetic on rational numbers is still a rational number. Consistency means the operations cannot produce contradictory results; for example, we must agree that division by 0 is not allowed, because defining a number divided by 0 as an arbitrary value would lead to logical contradictions.

If this algebraic system is not carefully designed, and the data types and operations it provides are inconvenient, describing algorithms becomes very difficult. At that point a strange phenomenon appears: translating the solution into code becomes far harder than solving the problem itself.

For example, we have used Arabic numerals for everyday calculation since childhood, and addition, subtraction, multiplication, and division are all convenient. Everyone naturally assumes numerical operations must work this way. But not necessarily! Most people know of Roman numerals. It is not clear that the Roman numeral system supports the familiar four operations in any convenient way (they certainly cannot be carried out as easily as with Arabic numerals, and the operations may have been defined differently). I have even wondered how ancient Romans managed to go shopping.

From this we can see that whether programs can be written simply is, at bottom, a question about the algebra behind the programming language.

And as we have mentioned before, running fast is essentially the same issue as writing simply: the point is to make high-performance algorithms easy to write. So running fast is also an algebraic problem.

Let’s make another analogy:

Most students who attended elementary school in China know the story of Gauss computing 1+2+3+…+100. Ordinary people just add the numbers one by one, 100 times. The young Gauss was clever: he noticed that 1+100=101, 2+99=101, …, 50+51=101, so the result is 50 times 101; he finished quickly and went home for lunch.

Hearing this story, we all feel Gauss was very clever to come up with such a simple, fast method. That is true, but it is easy to overlook one thing: in Gauss's era, multiplication already existed in humanity's arithmetic system (also an algebra)! As mentioned earlier, having learned the four operations from a young age, we may take multiplication for granted, but multiplication was in fact invented later than addition. Although multiplication is defined in terms of addition, its properties let us compute with a memorized multiplication table, which improves efficiency enormously. Without multiplication in Gauss's era, even the clever Gauss could not have solved the problem quickly.
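
In modern notation, Gauss's pairing trick is just the closed-form sum:

\sum_{k=1}^{100} k = \frac{100 \times 101}{2} = 5050,
\qquad\text{and in general}\qquad
\sum_{k=1}^{n} k = \frac{n(n+1)}{2}.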

The mathematical foundation of SQL is relational algebra, an algebraic system designed for batch computation over structured data. This is also why databases that use SQL are called relational databases.

Relational algebra was invented some fifty years ago, and both application requirements and hardware environments have changed enormously since then. Because of the huge installed base and the lack of mature alternatives, SQL based on relational algebra remains the most important database development language today. There have been improvements over the decades, but the foundation has not changed. Faced with contemporary complex requirements and modern hardware, relational databases struggle.

Relational algebra is too simple and lacks sufficient data types and operations, so when using SQL to describe a solution, one often has to find convoluted workarounds. Take the stock-rise problem: relational algebra is built on the mathematical theory of unordered sets and gives SQL no native concept of order, which turns a simple problem into a difficult one that is hard to write even with a detour. This produces exactly the phenomenon described earlier, where translating the solution is harder than solving the problem. The top 10 problem is the same: the aggregation operations designed for relational algebra do not include TopN, and SQL has no set data type, so this operation cannot be expressed as an aggregation and can only be described as a full sort.

Relational algebra is like an arithmetic system that has addition but has not yet invented multiplication: many things inevitably get done badly.

When a calculation is simple or performance requirements are modest, SQL remains fairly convenient; after all, there are many practitioners and a rich software ecosystem. But data requirements in modern applications keep growing more complex, and data volumes keep increasing, so continuing to rely on SQL seriously hurts productivity. Unfortunately, the problem is theoretical: no amount of engineering optimization can eradicate it, only mitigate it to a limited extent. Most database developers never think at this level, or, to preserve compatibility for existing users, choose not to; and so the mainstream database industry keeps circling inside this circle.

Then what should we do? How can we make calculations simpler to write and faster to run?

Invent a new algebra! An algebra with "multiplication". This is where esProc SPL differs. We gave SPL's algebraic foundation a mathematical name: discrete datasets. SPL is the formal language of this algebra. The advantages of SPL have been discussed many times in previous articles; with the support of the new algebra, code can truly be both simple to write and fast to run.
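
For a taste of what this looks like, here is how the two earlier problems are typically written in SPL, adapted from published esProc examples; treat the exact function names and options as illustrative rather than authoritative:

// Longest run of consecutive rising days: sort by date, start a new
// group whenever the price falls (group@i), then take the longest group
Stock.sort(TradeDate).group@i(Price < Price[-1]).max(~.len())

// Top 10 overall and top 10 per Area, written as aggregations
// (top returns a small set), so no full sort is implied
Orders.groups(; top(10; -Amount))
Orders.groups(Area; top(10; -Amount))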

Caption This! 🤔💭

Think you’ve got what it takes to whip up the cleverest caption for this image? Give it a shot!

[Image: a canvas with a plaster nose on it, sitting on a shelf]

Follow the DEVteam for more online camaraderie!

Top 15 Tips to Make Your Webpage Load Faster

In the competitive online landscape, every second counts when it comes to webpage loading times. Users expect websites to load quickly, and any delay can lead to frustration and abandonment. Optimizing your webpage for speed not only enhances user experience but also contributes to improved search engine rankings and higher conversion rates. Here are 15 effective tips to help you make your webpage load faster:

  1. Optimize Images: Large images are one of the main culprits for slow-loading webpages. Compress and resize images without compromising quality to reduce file size and improve loading times.

  2. Minify CSS, JavaScript, and HTML: Eliminate unnecessary characters, such as comments, whitespace, and line breaks, from your code to reduce file sizes. Tools like minifiers can automate this process.

  3. Enable Browser Caching: Leverage browser caching by setting appropriate HTTP headers to allow browsers to store static resources locally. This reduces the need for repeated downloads on subsequent visits.

  4. Use a Content Delivery Network (CDN): Distribute your website’s content across multiple servers located in different geographical regions. CDN servers deliver content to users from the nearest location, reducing latency and speeding up page loading times.

  5. Reduce Server Response Time: Optimize server performance by upgrading hardware, using caching mechanisms, and minimizing the execution time of server-side code and database queries.

  6. Limit HTTP Requests: Reduce the number of HTTP requests required to render a webpage by combining CSS and JavaScript files, and using CSS sprites to combine multiple images into a single file.

  7. Implement Asynchronous Loading: Load non-essential resources, such as scripts and stylesheets, asynchronously so they do not block rendering of the page. This allows critical content to be displayed faster (see the HTML sketch after this list).

  8. Utilize Lazy Loading: Delay the loading of images, videos, and other media until they are needed, such as when they come into view as the user scrolls down the page. Lazy loading conserves bandwidth and accelerates initial page rendering.

  9. Minimize Redirects: Each redirect adds additional HTTP requests and increases loading times. Minimize the use of redirects by updating internal links and ensuring that external links point directly to the intended destination.

  10. Optimize CSS Delivery: Load critical CSS inline and defer the loading of non-critical CSS to prioritize the rendering of above-the-fold content. This prevents render-blocking CSS from delaying page display.

  11. Opt for Faster Hosting: Choose a reliable web hosting provider with fast servers and sufficient resources to handle your website’s traffic. Consider upgrading to a dedicated or cloud hosting solution for improved performance.

  12. Enable Gzip Compression: Enable Gzip compression on your web server to reduce the size of text-based resources, such as HTML, CSS, and JavaScript files, before transmitting them over the network.

  13. Optimize Fonts: Minimize the number of fonts and font variations used on your webpage to reduce the number of HTTP requests and decrease loading times. Consider using system fonts or hosting fonts locally to improve performance.

  14. Prioritize Above-the-Fold Content: Load critical content, such as text and images visible without scrolling (above-the-fold), first to provide users with a fast initial impression of your webpage. Lazy load additional content as needed.

  15. Regularly Monitor and Test Performance: Use tools like Google PageSpeed Insights, GTmetrix, and WebPageTest to analyze your webpage’s performance metrics and identify areas for improvement. Continuously monitor loading times and make adjustments as needed to maintain optimal performance.
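
To make a few of these tips concrete, here is an illustrative HTML sketch (all file paths are placeholders) combining non-blocking scripts (tip 7), lazy-loaded images (tip 8), inlined critical CSS with deferred styles (tip 10), and system fonts (tip 13):

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Fast page sketch</title>

  <!-- Tip 10: inline the small amount of CSS needed for above-the-fold
       content so first paint is not blocked by a stylesheet download -->
  <style>
    header { font: 16px/1.4 system-ui; } /* tip 13: system font, no webfont request */
  </style>

  <!-- Tip 10: load the remaining CSS without blocking rendering -->
  <link rel="preload" href="/css/site.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/site.css"></noscript>

  <!-- Tip 7: async/defer keep scripts from blocking the HTML parser -->
  <script src="/js/analytics.js" async></script>
  <script src="/js/app.js" defer></script>
</head>
<body>
  <header>Above-the-fold content renders immediately (tip 14)</header>

  <!-- Tip 8: native lazy loading delays offscreen images until needed -->
  <img src="/img/hero.jpg" alt="Hero" width="800" height="400">
  <img src="/img/gallery-1.jpg" alt="Gallery" loading="lazy" width="400" height="300">
</body>
</html>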

By implementing these 15 tips, you can significantly improve the loading speed of your webpage, providing users with a seamless and enjoyable browsing experience while also enhancing your website’s overall performance and competitiveness in the digital landscape.

Top 7 Featured DEV Posts of the Week

Let’s kickstart the week with a look at last week’s top articles. These insightful reads cover a wide range of topics, including emotions in web development, the mysteries of “node_modules,” the importance of unit testing, winning hackathons, data ownership concerns, building a neural network using only CSS, and even a tutorial on creating a simple screen recorder in just 20 lines of JavaScript.

But wait, there’s more – we’re absolutely delighted to welcome FOUR brand new folks to our Top 7 author lineup. So get comfy and dive into these fantastic reads, all courtesy of the vibrant dev.to community!

Discover the intersection of emotions and web development with @ingosteinke. In a world concerned about AI’s impact on creativity and coding jobs, explore how harnessing feelings, from rage-driven to happiness-inspired development, can empower us in this compelling exploration of productivity and mental health in tech.

In the world of web development and JavaScript, there’s a perennial mystery: why do those “node_modules” seem to weigh more than the entire universe? 🌌 Dive into the intricacies of this common conundrum with @faizbshah, and discover why JavaScript dependencies tend to be bulkier than their counterparts in other languages like Go and Rust. Unpack the reasons behind this phenomenon, shedding light on the unique challenges of the JS ecosystem. 🚀💡

Dive into the world of unit testing with @rahulladumor, where the stakes are as high as a caffeine-fueled all-nighter ☕ or as daunting as a null pointer exception. This article unveils the importance of unit tests as your code’s first line of defense, offering immediate feedback, enhanced code confidence, and simplified debugging. Learn why, especially in complex tech environments like AWS and serverless, unit tests are not optional but a must for robust and maintainable code.

Get ready to supercharge your hackathon game with @code42cate as your guide. In this article, they unveil the secrets to winning hackathons, from turbocharging your feedback loop for lightning-fast development to leveraging an open-source starter template that streamlines your projects. These strategies and tools will have you building, iterating, and impressing the judges like a pro, regardless of your hacking experience.💰

@blackgirlbytes delves into the essential concept of data ownership on the web, shedding light on a reality where users possess their data, but don’t truly own it. Drawing from personal experiences and reflections on technology philosophy, this article unveils the importance of data sovereignty and introduces Web5 as a platform enabling users to control their data and identity. Explore how Web5’s decentralized approach empowers users and fosters a more equitable online ecosystem.

Ever imagine you’d build a neural network using only CSS? As @grahamthedev notes, it’s probably not a great idea, but it is a pretty fun challenge. Follow along to learn a couple of interesting CSS tricks that Graham used to get this neural network working!

If you’re fed up with the state of screen recorders, @ninofiliu has the solution for you. With just a few lines of JavaScript, you can create your own simple, but effective screen recorder.

And that’s a wrap on our Top 7 picks for this week! But the excitement doesn’t fade here. If you’re yearning for more riveting content and lively discussions, stay connected with the dev.to community. Your time to shine might be just around the corner!

Wondering where you can catch more of these fantastic articles, discussions, and updates? Well, it’s your lucky day because this week’s top articles will also be featured in our weekly DEV Community newsletter, landing in your inbox every Tuesday. Don’t miss out on the best discussions – make sure you’re opted in! 🚀📩

Linux servers – essential security tips

Web developers generally hate messing with sysadmin-type tasks; however, at some point in your day job or personal projects you will need to spin up a server instance.

In this guide I will cover some of the basic security essentials you need to ensure your server is relatively secure.

Add an SSH-only user with sudo access:

Note: This is a verbose approach just to illustrate all the steps needed.

NEW_SSH_USER=developer
sudo useradd -m -s /bin/bash $NEW_SSH_USER
sudo usermod -aG sudo $NEW_SSH_USER
sudo mkdir -p /home/$NEW_SSH_USER/.ssh
sudo touch /home/$NEW_SSH_USER/.ssh/authorized_keys

# Next, copy your public key into authorized_keys
sudo nano /home/$NEW_SSH_USER/.ssh/authorized_keys

# Next, fix ownership and permissions
sudo chown -R $NEW_SSH_USER:$NEW_SSH_USER /home/$NEW_SSH_USER
sudo chmod 700 /home/$NEW_SSH_USER/.ssh
sudo chmod 600 /home/$NEW_SSH_USER/.ssh/authorized_keys

Change the default SSH port

This will not really hide your SSH port, since a port scan can reveal which port you are using. Nonetheless, it is good practice to change the default SSH port, as it filters out a lot of automated bot traffic.

sudo nano /etc/ssh/sshd_config
# Change: Port 22 => Port xyz
sudo service ssh restart

Please test that you can now SSH in with the new username and port before moving on to the next step.

Disable root and password access

sudo nano /etc/ssh/sshd_config
# Change: PermitRootLogin yes => PermitRootLogin no
# Change: PasswordAuthentication yes => PasswordAuthentication no

# Allow only our newly created user account access
# Add/Change: AllowUsers => AllowUsers developer

sudo service ssh restart
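
One extra safeguard worth knowing: sshd can validate its own configuration file, which helps avoid locking yourself out over a typo. Run the check before (or chained with) the restart:

# Validate sshd_config syntax; restart the daemon only if the check passes
sudo sshd -t && sudo service ssh restart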

Next, install fail2ban – which will monitor SSH connections and block abuse attempts:

sudo apt-get install fail2ban -y
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Uncomment/adjust a few defaults in the local copy:
# whitelist localhost, ban for 1 hour after 3 failures within 10 minutes
sudo sed -i 's/# ignoreip = 127.0.0.1/ignoreip = 127.0.0.1/' /etc/fail2ban/jail.local
sudo sed -i 's/# bantime = 10m/bantime = 1h/' /etc/fail2ban/jail.local
sudo sed -i 's/# findtime = 10m/findtime = 10m/' /etc/fail2ban/jail.local
sudo sed -i 's/# maxretry = 5/maxretry = 3/' /etc/fail2ban/jail.local
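
After installing, it is worth confirming that fail2ban is running and that the SSH jail is active; the commands below assume the default jail name of sshd:

# Ensure fail2ban starts on boot and is running now
sudo systemctl enable --now fail2ban

# Check the overall status and the SSH jail specifically
sudo fail2ban-client status
sudo fail2ban-client status sshd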

Firewall

On Ubuntu servers, UFW generally comes preinstalled; if not, just run "sudo apt install ufw".

Opening ports:

# Everyone
sudo ufw allow 24/tcp

# just your IP
sudo ufw allow from 192.168.2.2 to any port 24

Blocking ports:

# Everyone
sudo ufw deny 24/tcp

# Specific IP
sudo ufw deny from 192.168.1.100 to any port 24
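
Putting this together, a typical baseline, sketched below, is to default-deny inbound traffic and open only what you need; 2222 stands in for whatever custom SSH port you chose earlier:

# Default-deny inbound, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Open your custom SSH port plus web traffic, then enable the firewall
sudo ufw allow 2222/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose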

General advice

  1. Lock down your servers to a specific IP. You can use either a VPN or a zero-trust service.
  2. Monitor your /var/log/syslog from time to time. The firewall and fail2ban log here; you may spot a particular network or region attacking your server, which you can then block.
  3. Use a network firewall in front. Most hosting companies provide some sort of "cloud firewall". Setting this up not only secures your server but also limits how much traffic reaches your box.
  4. Set up a jumpbox if you have multiple web servers, database servers, and so forth. I strongly advise setting up a VPC or closed network where only the jumpbox has access to these servers, so you cannot SSH into them directly from outside. You can also set up a script to shut down the jumpbox at night, or something similar. This does introduce a single point of failure, but if you secure it well and use a floating or fixed IP, it should work fine.

Conclusion

This is just a basic rundown to get you started. It is by no means an exhaustive list, but hopefully a good start.

If you find server management painful and would prefer an automated tool, please check out my project: Scriptables.

Scriptables is simply an orchestration tool that takes away the pain of setting up and managing servers.

Testμ 2023: Highlights From Day 2

At the end of Day 2 of the TestMu Conference 2023, we reflect on a day filled with learning and inspiration. From the first session to the final chat exchange, we experienced innovation and togetherness. Throughout the day, we explored various aspects of testing, including AI’s impact and automation’s development, led by industry experts. These discussions have left a lasting impact, inspiring us beyond our screens.

We’re ready to carry its lessons and inspiration as we conclude this exceptional day. The connections we’ve made and the insights we’ve gained will shape the path ahead, allowing us to shape the future of testing.

Let’s see some highlights from Day 2 of the TestMu Conference 2023.

Welcome Note: Day 2 by Manoj Kumar

Manoj Kumar greeted all the speakers and attendees with a warm welcome, infusing everyone with excitement as we were on the brink of commencing Day 2 of TestMu 2023.

As we delve into Day 2, everyone is encouraged to prepare for an even more captivating array of talks, sessions, and discussions. The schedule features various intriguing topics and esteemed speakers, including Maaret Pyhäjärvi, Paul Grizzaffi, Andrew Knight, Mathias Bynens, Christian Bromann, and our keynote speaker, Mahesh Venkataraman. The anticipation is palpable for the exclusive preview of the “Future of Quality Assurance” survey, reflecting the collective insights of our community and underscoring the significance of collaboration and shared viewpoints.

Regardless of whether attendees are seasoned testing professionals or embarking on their journey, the event promised something for everyone. The expectation was for further enlightening talks that delve deeper into the testing domain, panel discussions that explore diverse perspectives, and sessions meticulously designed to provide tangible takeaways, enhancing the attendees’ testing practices.

The contests, certification marathons, and #LambdaTestYourApps challenges were compelling opportunities to showcase your abilities and gain prominence in this dynamic community.

Expanding the Horizon of Innovation in Testing by Mahesh Venkataraman

Over the last twenty years, there has been a significant transformation in testing processes and technology. However, the concept of innovative testing has revolved chiefly around test automation. Recently, AI-driven testing has gained popularity. But should innovation in testing be limited to only automation or AI-driven methods? Sometimes, the most valuable ideas come from outside the industry. Can we find inspiration from how other fields have transformed their products and practices? Can we extract and apply their core principles to testing?

This session by Mahesh encouraged contemplation by delving into how we can reshape, rethink, and reposition testing for the benefit of all stakeholders. By scrutinizing the challenges encountered in contemporary software delivery and capitalizing on innovation principles borrowed from various domains, we can visualize a testing future that adds extra value to everyone involved.

Key Takeaways: Understanding the implicit and explicit challenges facing modern software delivery and how generic innovation principles can be applied to bring about a win-win for all — opening new career pathways for practitioners, creating business value for customers, and generating new revenue streams for tools and service providers.

About the Speaker:

With more than 34 years of experience in the Information Technology sector, Mahesh Venkataraman has held diverse roles encompassing technology management, service management, and business management.

Panel Discussion: Evolution of Testing in the Age of DevOps

Software engineering teams have warmly embraced DevOps to achieve intelligent, swift, and daily shipping. Yet, the question remains: Does this guarantee confident shipping? Amid the DevOps era, continuous testing offers solutions and hurdles as testing methodologies evolve.

Our distinguished panel of industry luminaries engaged in a conversation about this evolution, shedding light on their contributions to aiding clients in overhauling their testing approaches through DevOps.

With approximately twenty years of software testing experience, Asmita Parab is a seasoned professional dedicated to ensuring the delivery of top-notch software products. As Head of Testing at UST Product Engineering, she leads a team of skilled professionals, driving excellence in testing practices and guaranteeing software application reliability. Committed to continuous improvement, she stays updated with emerging testing trends, shaping industry best practices within the organization. Asmita’s expertise ensures high-quality standards for customers and organizational growth.

About Panelists:

  • A Business and Technology leadership veteran with over two decades of experience, Bhushan Bagi excels in scaling businesses and fostering high-performance teams. Currently overseeing Business for Quality Engineering at Wipro, he spearheads various aspects from Go-to-Market strategies to industry engagement. Bhushan’s transformational expertise spans multiple domains, making him a sought-after consultant for business and technology growth.

  • Harleen Bedi, a senior IT consultant, specializes in developing and selling IT offerings related to quality engineering and emerging technologies. She crafts and deploys QE strategies and innovations for enterprises, driving business objectives for clients. With her focus on AI, Cloud, Big Data, and more, Harleen is pivotal in aligning technology advancements with quality engineering.

  • Mallika Fernandes is an IT leader with an impressive 24-year innovation journey. As part of the Cloud First group at Accenture, she leads Quality Engineering innovation and automation, holding eleven patents for her pioneering work. Her passion for AI/ML and Cloud Quality transformation is reflected in her contributions.

  • Vikul Gupta, the Head of NextGen CoE at Qualitest, a modern quality engineering company, boasts over twenty years of experience with Tier 1 companies. His expertise spans quality engineering transformation, NextGen solutions, and co-innovation with partners worldwide. With a robust technological background encompassing AI/ML, DevOps, Cloud, and more, Vikul brings domain-specific insights to the forefront of his leadership roles.

My Crafting Project Became a Critical Infrastructure by Elizabeth Zagroba

Frustrated with the usual testing process, Elizabeth developed a Python script that called the APIs, built and deployed the app, and printed updates in the terminal. Initially addressing her immediate needs, it unexpectedly automated a manual step in their release process.

Other teams adopted it, expanding its functionality. She managed code submissions, even those she disagreed with, to keep things unblocked. Eventually, maintaining the code became burdensome, and she stopped. However, renewed interest sparked when a merge request came in, leading to collaborative improvements and the addition of tests. This rejuvenated her enthusiasm for maintaining the script, which had grown into a vital piece of infrastructure.

Key Takeaways:

  • Good collaboration takes time and energy.

  • Small things for one use can grow into bigger things with many benefits.

  • Pick up the work for the skills you want to build.

About the Speaker:
Elizabeth serves as the Quality Lead at Mendix in The Netherlands. Her role involves enhancing exploratory testing by orchestrating collaborative “mob” testing sessions, effectively addressing gaps, and ensuring that the “it should just work” principle holds true. She fosters a shared comprehension of projects, offers critical insights, and supports team members beyond formal management channels. Additionally, she adeptly crafts API tests and communicates proficiently in English, making her a key asset in ensuring quality and cohesion within the team.

Let’s Play Rhetoric for All Things Testing by Maaret Pyhäjärvi

Remote screen sharing provided the platform for playing Rhetoric, an engaging public speaking game. In this adapted version, willing volunteers took part during the session, delivering concise two-minute talks focused on diverse testing topics. Guided by dice rolls, players encountered a variety of speaking challenges.

  • TOPIC: Presented a framing word and six constraints to tailor the talk’s style.

  • CHALLENGE: Introduced specific speaking constraints.

  • QUESTION: Supplied audience prompts paired with six constraints.

  • REFLECTION: Granted the opportunity to speak freely on any topic.

  • CHOICE: Allowed participants to select from any of the four options mentioned.

About the Speaker:
In the past, Maaret Pyhäjärvi showcased her expertise as an exceptional exploratory tester while holding the role of Development Manager at Vaisala. She displayed proficiency as a tester, (polyglot) programmer, speaker, author, and community facilitator.

Staying Ahead In The Tech World by Rahul Parwal and Ajay Balamurugadas

During rapid technological changes, maintaining a competitive edge demands continuous updates. This talk delved into strategies that honed testing skills:

  • Building Your Toolkit: Understanding the value of a versatile toolkit and mastering tool selection amidst many options.

  • Leveraging Social Media: Discovering how staying informed through social platforms amplifies professional prowess.

  • Unlocking Automation: Exploring automation’s role, not solely in testing but also in daily tasks via micro tools.

  • Personal Insights: Gaining pragmatic insights from speakers’ experiences in tool selection and testing.

Key Takeaways:

  • Toolkit Significance: Learn to create a comprehensive toolkit with fitting tools.

  • Social Media’s Edge: Uncover how staying connected online enhances your testing prowess.

  • Automation Unveiled: Embrace automation’s power using micro tools.

  • Practical Insights: Benefit from firsthand insights to thrive in the testing tech landscape.

About Speakers:

  • Ajay Balamurugadas, known as ‘ajay184f’ in the testing community, is a seasoned expert with extensive experience redefining testing methodologies. With a distinguished background, he has co-founded Weekend Testing, authored multiple insightful books, and holds the position of Senior Director, QE at GSPANN Technologies.

  • Rahul Parwal is a proficient Software Tester and generalist. As a Senior Software Engineer at IFM Engineering in India, he specializes in testing IoT systems encompassing Unit, API, Web, and Mobile Testing. Fluent in C# and Python, Rahul’s expertise is well-rounded. He actively contributes to the testing community through various channels, sharing his insights on LinkedIn, Twitter, his blog, YouTube, and meetups.

Balancing the Test Pyramid, the AWS way!

The AWS team delved into their comprehensive testing approach, amalgamating hybrid UI and API testing with synthetic canary testing.

Their methodology responded to the challenge of balancing test coverage and efficiency while maintaining superior quality. Practical techniques and frameworks employed by AWS teams seamlessly integrated UI and API testing, boosting coverage across the software stack.

Additionally, they showcased the application of synthetic canary testing, putting real-world scenarios to the test in production to ensure operational excellence (OE) metrics coverage. By simulating actual production traffic and comparing outcomes with established benchmarks, anomalies, and potential issues were proactively identified, reinforcing system reliability and scalability.

Key Takeaways:

  • Hybrid Testing Approach: The AWS team’s hybrid testing approach, blending UI and API testing, struck a balance between test coverage and efficiency.

  • Expanded Test Coverage: Understanding how AWS leveraged hybrid testing to simultaneously validate user interface interactions and backend functionality, enhancing test coverage.

  • Operational Excellence: Gaining insights into leveraging synthetic canary testing to fortify your organization’s testing endeavors for system reliability and availability.

  • Practical Insights: Exploring the tools and frameworks that AWS teams employed in implementing the hybrid UI and API testing strategy, with actionable techniques for enhancing personal testing strategies.

About the Speaker:

Min Xu possessed substantial expertise, showcasing a robust background in quality and engineering. In her recent role as the Manager of engineering teams at AWS, her influence was significant. With over 15 years of industry experience, she contributed to Amazon’s pursuit of product quality and customer satisfaction over her five-year tenure there. Min Xu held multiple positions in quality and engineering management throughout her career.

Expect to Inspect — Performing Code Inspections on Your Automation by Paul Grizzaffi

Automation development is indeed a form of software development. Regardless of using drag-and-drop or record-and-playback tools, there’s code running behind the scenes.

Treating automation as software development is essential to avoid pitfalls. Just as in software development, code inspection plays a crucial role. In this session, Paul Grizzaffi explained the importance of code inspections for automation, highlighting differences from product software reviews and sharing real-life issues discovered during these assessments.

Key Takeaways:

  • Value of Inspections

  • Business-Driven Inspection Approach

  • Utilization of Tools

  • Illustrative Examples

About the Speaker:

Paul Grizzaffi, a Senior Automation Architect at Vaco, is passionate about his expertise in technology solutions for testing, QE, and QA realms. His role spans automation assessments, implementations, and contributions to the broader testing community.

An accomplished speaker and writer, Paul has presented at local and national conferences and is associated with Software Test Professionals and STPCon. He holds advisory roles and memberships in industry boards such as the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas.

Test Observability — A Paradigm Shift from Automation to Autonomous to Deep Observability by Vijay Kumar Sharma

The software industry has witnessed several transformations over time, often encountering disruptions every five years. Software testing, too, remained connected to the latest trends and technologies. Testing strategies aligned with agile development, rapid deployments, and heightened customer expectations for reliability and user-friendly interfaces. Like business logic, they grew swiftly and dependably.

Quality engineering (QE) processes evolved from test automation to autonomous testing, and the recent session delved into a new growth requirement: test observability. Test observability involved extracting continuous insights from automation infrastructure to guide decisions about product stability, reliability, and speed gaps in constant deployment. It also streamlined resource allocation for tests, providing a holistic system view through automated testing.

Key Takeaways: In the recently concluded session, the focus remained on value-driven testing achieved through optimal technology utilization for informed decision-making and intelligent execution.

About the Speaker:

Vijay boasts over 18 years of experience in Quality Engineering, primarily affiliated with Adobe and Sumologic. He has showcased his expertise by speaking at numerous testing conferences across India, and proposed the session "Test Observability and Its Significance in the Current Landscape of Rapidly Evolving Tech Enterprises."

Advanced Strategies for Rest API Testing by Julio de Lima

Tired of the oversimplified view of Rest API testing? Julio dove deeper and explored advanced strategies, covering areas like contract testing, architecture-style adherence, security, and more. Attendees came away with tools and tips to elevate their Rest API testing, insights into complex components, and the skills to define tailored testing techniques for more efficient planning and strategy.

Key Takeaways

  • Julio comprehensively covered crucial facets of Rest API testing, encompassing contract testing, backwards compatibility testing, adherence to Rest architecture style validation, token structure evaluation, Rest API heuristic testing, external service simulation, security testing, and performance testing.

  • He elaborated on the significance of each topic, detailing the steps for each type of testing, highlighting applicable tools, and offering illustrative examples for better comprehension.

About the Speaker:

Júlio de Lima is a specialist in Software Testing with 13 years of experience. Júlio has a Bachelor’s Degree in Software Engineering, a specialization in Teaching in Higher Education, and a Master’s Degree in Electrical and Computational Engineering with a focus on Testing and Artificial Intelligence.

A Live Intro to Python Testing by Andrew Knight

Python proved itself as an exceptional language for test automation, celebrated for its concise syntax and extensive package library. In the recently concluded session, he guided participants through the realm of Python-driven testing via live coding — an interactive experience without slides! The spotlight was on project setup with pytest and Playwright, crafting unit, API, and UI tests collaboratively. As the session concluded, attendees were well-prepared to embark on their own test automation journey with Python, armed with additional resources for further learning.

About the Speaker:

Andrew Knight, also known as “Pandy,” is the Automation Panda. He’s a software quality champion who loves to help people build better-quality software. An avid supporter of open-source software, Pandy is a Playwright Ambassador and the lead developer for Boa Constrictor, the .NET Screenplay Pattern.

Open Source for Fun and Profit: Opportunities for Personal and Professional Growth

Irrespective of your skill level, open-source projects present distinctive avenues for knowledge-sharing and mutual learning. From crafting documentation to fixing bugs and adding features, dedicating time to open-source initiatives yields both short-term and lasting rewards. Perhaps you want to explore a new language or technology but are unsure where to begin, or you aim to refine your abilities and gain valuable insights from project maintainers.

The prospect of putting yourself out there could be daunting, but the rewards of expanding your network and expertise were invaluable. In the recently concluded session, the example of a Bitcoin open-source ecosystem illustrated that opportunities abound for everyone.

Key Takeaways

  • Emphasis on Collaboration and Innovation: The session highlighted the significance of open source in fostering collaboration and driving innovation.

  • Identifying Contribution Opportunities: Attendees learned how to identify active and well-maintained open-source projects to contribute to, enhancing their engagement in the community.

  • Understanding Open Source Stacks: The session provided insights into the composition and characteristics of open source stacks.

About the speaker
Felipe has been in the tech industry for almost twenty years and has been a Senior Software Engineer in Test at Netflix for the past six, where he helps build the UI delivered to millions of Smart TVs and other streaming devices around the world.

Chrome ❤ Testing

This talk provided an overview of the Chrome team's recent initiatives to better support testing and automation scenarios, focusing on "Chrome for Testing" and Chrome's newly introduced Headless mode.

About the speaker

Mathias is a web standards enthusiast from Belgium who currently works on Chrome DevTools. He likes HTML, CSS, JavaScript, Unicode, performance, and security.

Quality in Digital Transformation

In the concluded panel discussion, titled "Quality in Digital Transformation," the panelists delved into the interconnectedness of quality and digital transformation. Esteemed leaders across various industries shared their perspectives on upholding standards, ensuring smooth user experiences, and mitigating risks in a dynamically shifting technological landscape. They provided insights into establishing and maintaining quality processes conducive to agile transformation and securing digital assets.

Furthermore, the discussion explored harnessing data-driven decision-making to oversee quality and performance, and the strategies to ensure quality assurance and compliance in the digital realm. The panel shed light on how quality assurance is pivotal in driving successful digital transformation for businesses.

About Panelists:

  • With over 15 years of experience in the technology industry, Anish Ohri has played a vital role in advancing various innovative products and solutions across diverse domains, including Publishing, Finance, Multimedia, e-commerce, Gaming, and Enterprise Software.

  • Manish is a Quality Engineering enthusiast known for his expertise in developing and deploying quality software. He has actively contributed to open-source projects like Puppeteer and Playwright and advocates for balanced testing strategies. His discussions revolve around testing event-driven systems, GRPC constructs, and Contract Testing.

  • Todd Lemmonds has over 20 years of experience. He is a visionary in software quality assurance. He champions early and frequent testing, driving his shift-left testing strategies. Todd emphasizes tester involvement during story refinement, integration of appropriate tests into automated pipelines, and the right test types at suitable development stages. His mission is to create an environment where testers thrive and enhance skills for modern software development.

  • Robert Gonzalez is Vice President of Engineering at SugarCRM, a prominent CRM software company. His role involves steering engineering initiatives and fostering innovation within the CRM realm. Robert leads a skilled team and contributes significantly to developing and enhancing SugarCRM’s top-tier products, ensuring superior quality, functionality, and customer contentment.

  • As Director of Quality at Hudl, Seema Prabhu drives a quality-centric culture and sets up high-performance teams. With a passion for quality and years of experience, she excels in leadership, process establishment, coaching, and mentoring. Seema advocates for efficient testing and shares her expertise through speaking engagements at meetups and conferences.

Component Testing with WebdriverIO

An informative session emphasized the growing significance of web component testing in the rapidly evolving landscape of front-end frameworks. The session shed light on how testing individual UI components has become a pivotal aspect of testing stacks, offering the advantage of thoroughly examining various features within an element. Doing so effectively reduces the reliance on end-to-end tests, which are generally slower to run.

The session, hosted by Christian Bromann, the Founding Engineer at Stateful, delved into these novel browser runner capabilities. Through engaging live demonstrations, attendees were treated to firsthand experiences of testing components in various popular frameworks, including Vue, Svelte, React, and Preact. The session showcased the remarkable ease and efficiency with which component testing can now be approached.

About the speaker

Christian Bromann is a Full-stack Engineer passionate about Open Source and Open Standards. Driven individual with the ability to adapt to any situation and proven potential to grow self and others. He is a quality-focused engineer with a background in automation technologies and test-driven development.

Test Automation with SWAG

This enlightening session addressed the crucial matter of how to effectively supply automation frameworks with an unending stream of test data. The session explored a range of solutions, including both traditional and emerging tools, that cater to the test data-driven approach. Among these, a prevalent method involves storing all input values within storage files like CSV, YAML, JSON, and more. Another viable option includes harnessing the capabilities of databases, and offering resolutions to various challenges while meeting the dynamic variable requirements for automated scripting.

A noteworthy highlight of the session was the introduction of an innovative API cloud solution. This solution simplifies the process of interfacing with multiple databases, eliminating the need for integrating drivers into various automation frameworks. Attendees were presented with a seamless way to establish communication with different databases, streamlining the process without the hassle of managing multiple drivers. The session successfully conveyed how this solution enhances the efficiency and flexibility of automation frameworks.

Key Takeaways

  • The session included insights into data-driven automation testing, the utilization of dynamic test data, the diversification of databases, and the importance of documenting test data.

About the speaker

Garvit Chandna is Head of Test Engineering at Equinox with 14 years of experience in handling globally distributed automation and manual test engineering teams. Wide experience in management and architecting complex automation frameworks.

Wrapping up Day 2!

A warm and sincere thank you to our esteemed speakers who have significantly contributed to shaping the success of Day 2 at TestMu 2023. The event was executed meticulously, showcasing insights from experienced speakers spanning the global testing community.

As we bring Day 2 to a close, we invite all to join us for Day 3, where the momentum of productivity and innovation continues unabated. Together, let’s delve into novel testing paradigms, actively engage with our community, and collectively define the future of testing.

For those who have been with us since the inception of Test Conference 2022, your unwavering support has been truly invaluable. Let’s continue the journey towards a world with minimal bugs, embracing the testing revolution by securing your spot at the LambdaTest TestMu Conference 2023.

Become a trailblazer in shaping the testing landscape. Your participation remains pivotal as we stride into Day 3, navigating the currents of technology and propelling meaningful change. We extend our heartfelt gratitude for your involvement in this remarkable event. Anticipating another extraordinary day ahead!

Stay inquisitive, stay engaged, and wholeheartedly embrace the future of testing!

Node.js Deep Dive: Exploring Asynchronous I/O

  • Introduction
  • How Node Handles Asynchronous Code
  • Asynchronous Operations: What Are They?

    • Blocking vs Non-Blocking Asynchronous Operations
  • Experiments with Blocking Functions
  • Experiments with Non-Blocking Functions
  • Non-Blocking Asynchronous Operations and the OS

    • Understanding File Descriptors

      • What is an FD?
      • FDs and non-blocking I/O
    • Monitoring FDs with syscalls

      • Understanding select
      • Epoll
      • io_uring
  • Conclusion

Introduction

I have recently been studying how asynchronous code is executed in Node.js.

I ended up learning (and writing) quite a lot, from an article on how the Event Loop works to a Twitter thread explaining who actually waits for the HTTP request to finish.

If you like, you can also check out the mind map I created before writing this post by clicking here.

Now, down to business!

How Node Handles Asynchronous Code

In Node:

  • All JavaScript code runs on the main thread (the sketch below illustrates this).
  • The libuv library is in charge of handling I/O (in/out) operations, that is, asynchronous operations.
  • By default, libuv makes 4 worker threads available to Node.js.
    • These threads are only used when blocking asynchronous operations are performed; in that case they block one of libuv's threads (which are operating-system threads) instead of the main thread (where Node executes).
  • There are blocking and non-blocking operations, and most asynchronous operations today are non-blocking.
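
Before moving on, here is a minimal sketch (the file name is mine) demonstrating that first point: all JavaScript, including callbacks, runs on the single main thread, so a CPU-bound loop delays a timer that was already due.

// main-thread.js
// A timer scheduled for 100 ms...
setTimeout(() => {
  console.log("timer fired (it had to wait for the main thread)");
}, 100);

// ...cannot fire while the main thread is busy.
const start = Date.now();
while (Date.now() - start < 2000) {
  // busy-wait for ~2 seconds, monopolizing the only JS thread
}
console.log(`busy loop done after ${Date.now() - start}ms`);

// Output order: the busy-loop line prints first, and the timer
// callback runs roughly 2 seconds late despite its 100 ms deadline.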

Asynchronous Operations: What Are They?

There is usually some confusion around asynchronous operations.

Many people believe they mean something happening in the background, in parallel, at the same time, or on another thread.

In fact, an asynchronous operation is simply an operation that will not return now, but later.

Such operations depend on communicating with external agents, and those agents may not have an immediate answer to your request.

We are talking about I/O (input/output) operations.

Examples (a short sketch follows the list):

  • Reading a file: data leaves the disk and enters the application.
  • Writing to a file: data leaves the application and enters the disk.
  • Network operations

    • HTTP requests, for example.
    • The application sends an HTTP request to some server and receives data back.
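
Here is a minimal sketch of all three (file name and contents are mine; it assumes an ES module on Node 18+, where fetch is built in). Note the log order: every call returns immediately and completes later.

// io-examples.js
import { writeFile, readFile } from "fs";

// File write: data leaves the application and enters the disk
writeFile("example.txt", "hello", (err) => {
  if (err) throw err;
  // File read: data leaves the disk and enters the application
  readFile("example.txt", "utf8", (err, content) => {
    if (err) throw err;
    console.log("file says:", content);
  });
});

// Network: the request goes out now, the response arrives later
fetch("https://www.google.com").then((res) => {
  console.log("HTTP status:", res.status);
});

console.log("this line runs first: none of the calls above blocked");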

[Figure: Node calls libuv, libuv issues syscalls, and the event loop runs on the main thread]

Blocking vs Non-Blocking Asynchronous Operations

In the modern world (where people barely talk to each other anymore), most asynchronous operations do not block.

But wait, that means:

  • libuv makes 4 threads available (by default).
  • those threads "take care of" blocking I/O operations.
  • the vast majority of operations are non-blocking.

Seems kind of pointless, right?

[Figure: libuv's worker threads handle blocking asynchronous operations]

With that question in mind, I decided to run some experiments.

Experiments with Blocking Functions

First, I tested a CPU-intensive asynchronous function, one of Node's rare blocking asynchronous functions.

The code was the following:

// index.js
import { pbkdf2 } from "crypto";

const TEN_MILLIONS = 1e7;

// CPU-intensive asynchronous function
// Goal here: block a worker thread
// Original purpose: derive a key from a password
// The third parameter is the number of iterations;
// in this example we pass 10 million
function runSlowCryptoFunction(callback) {
  pbkdf2("secret", "salt", TEN_MILLIONS, 64, "sha512", callback);
}

// How many libuv worker threads will be available?
console.log(`Thread pool size is ${process.env.UV_THREADPOOL_SIZE}`);

const runAsyncBlockingOperations = () => {
  const startDate = new Date();
  const runAsyncBlockingOperation = (runIndex) => {
    runSlowCryptoFunction(() => {
      const ms = new Date() - startDate;
      console.log(`Finished run ${runIndex} in ${ms / 1000}s`);
    });
  };
  runAsyncBlockingOperation(1);
  runAsyncBlockingOperation(2);
};

runAsyncBlockingOperations();

To validate the behavior, I ran this command:

UV_THREADPOOL_SIZE=1 node index.js

IMPORTANT:

  • UV_THREADPOOL_SIZE is an environment variable that determines how many libuv worker threads Node will start.

The result was:

Thread pool size is 1
Finished run 1 in 3.063s
Finished run 2 in 6.094s

In other words, with a single thread each run took about 3 seconds, but they executed sequentially, one after the other, so the second run only finished at around the 6-second mark.

Next, I tried the following test:

UV_THREADPOOL_SIZE=2 node index.js

And the result was:

Thread pool size is 2
Finished run 2 in 3.225s
Finished run 1 in 3.243s

This confirms that libuv's worker threads are what handle blocking asynchronous operations in Node.js.
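
pbkdf2 is not the only API backed by the pool: file-system calls go through it as well. Below is a variation of the same experiment, a sketch rather than a benchmark (the file path is a placeholder; the serialization only becomes visible with a file large enough that each read takes measurable time).

// fs-threadpool.js
import { readFile } from "fs";

console.log(`Thread pool size is ${process.env.UV_THREADPOOL_SIZE}`);

const startDate = new Date();
for (let i = 1; i <= 4; i++) {
  // Each readFile is dispatched to a libuv worker thread
  readFile("/path/to/some/large/file", (err) => {
    if (err) throw err;
    console.log(`Read ${i} finished in ${(new Date() - startDate) / 1000}s`);
  });
}

Run it with UV_THREADPOOL_SIZE=1 and then UV_THREADPOOL_SIZE=4: with one thread the completion times stack up; with four they cluster together.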

But what about the non-blocking ones? If no thread waits for them, how do they work?

I decided to write another function to test that.

Experiments with Non-Blocking Functions

The fetch function (native in Node) performs an asynchronous network operation, and it is non-blocking.

With the following code, I reran the first experiment:

//non-blocking.js
// How many libuv worker threads will be available?
console.log(`Thread pool size is ${process.env.UV_THREADPOOL_SIZE}`);

const startDate = new Date();
fetch("https://www.google.com").then(() => {
  const ms = new Date() - startDate;
  console.log(`Fetch 1 returned in ${ms / 1000}s`);
});

fetch("https://www.google.com").then(() => {
  const ms = new Date() - startDate;
  console.log(`Fetch 2 returned in ${ms / 1000}s`);
});

And I executed the script with the following command:

UV_THREADPOOL_SIZE=1 node non-blocking.js

The result was:

Thread pool size is 1
Fetch 1 returned in 0.391s
Fetch 2 returned in 0.396s

Then I tested with two threads, to see whether anything changed:

UV_THREADPOOL_SIZE=2 node non-blocking.js

And then:

Thread pool size is 2
Fetch 2 returned in 0.402s
Fetch 1 returned in 0.407s

From this I could observe that:

Having more threads running in libuv does not help with non-blocking asynchronous operations.

But then I went back to wondering: if no libuv thread sits there "waiting" for the request to come back, how does this work?

My friend, that is when I fell into a gigantic rabbit hole of research about the inner workings of:

Non-Blocking Asynchronous Operations and the OS

Operating systems have evolved considerably over the years to handle I/O in a non-blocking way. This is done through syscalls, namely:

  • select/poll: the traditional ways of handling non-blocking I/O, generally considered the least efficient.
  • IOCP: used on Windows for asynchronous operations.
  • kqueue: the mechanism on macOS and BSD.
  • epoll: efficient, and used on Linux. Unlike select, it is not limited by the number of FDs.
  • io_uring: an evolution of epoll, bringing performance improvements and a queue-based approach.

To understand this better, we will need to dive into the details of non-blocking I/O.

Understanding File Descriptors

To explain non-blocking I/O, I first need to quickly explain the concept of File Descriptors (FDs).

What is an FD?

It is a numeric index into a table maintained by the kernel, where each entry holds:

  • The resource type (file, socket, device, and so on).
  • The current position of the file pointer.
  • Permissions and flags, defining modes such as read or write.
  • A reference to the resource's data structure inside the kernel.

FDs are fundamental to I/O management, and you can even see one from Node, as the sketch below shows.
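
A minimal sketch (file name and contents are mine): fs.open hands back exactly the numeric index into the kernel table described above.

// fd-demo.js
import { writeFileSync, open, read, close } from "fs";

writeFileSync("fd-demo.txt", "hello, file descriptors");

open("fd-demo.txt", "r", (err, fd) => {
  if (err) throw err;
  // 0, 1 and 2 are stdin/stdout/stderr, so this is usually >= 3
  console.log("the kernel handed us FD:", fd);
  const buf = Buffer.alloc(64);
  read(fd, buf, 0, buf.length, 0, (err, bytesRead) => {
    if (err) throw err;
    console.log(`read ${bytesRead} bytes through that FD`);
    close(fd, (err) => {
      if (err) throw err;
    });
  });
});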

FDs and non-blocking I/O

When a non-blocking I/O operation starts, Linux ties an FD to it without interrupting (blocking) the process's execution.

For example:

Imagine you want to read the contents of a very large file.

Blocking approach:

  • The process calls the read-file function.
  • The process waits for the OS to read the file's contents.
    • Until the OS finishes, the process is blocked.

Non-blocking approach:

  • The process requests an asynchronous read.
  • The OS starts reading the contents and returns an FD to the process.
  • The process is not stuck and can do other things.
  • From time to time, the process makes a syscall to check whether the read has finished.

It is the process that decides how the read will be performed, via the fcntl function with the O_NONBLOCK flag, but that is a side note for now.

Monitoring FDs with syscalls

To watch multiple FDs efficiently, operating systems provide a few syscalls:

Understanding select:

  • It receives a list of FDs.
  • It blocks the process until one or more FDs are ready for the specified operation (read, write, exception).
  • After the syscall returns, the program can iterate over the FDs to identify the ones that are ready for I/O.
  • Its lookup algorithm is O(n).
    • Inefficient and slow with many FDs.

Epoll

An evolution of select: it stores the FDs in a self-balancing tree, making access time practically constant, O(1).

Fancy!

How it works:

  • Create an epoll instance with epoll_create.
  • Associate FDs with that instance using epoll_ctl.
  • Use epoll_wait to wait for activity on any of the FDs.
  • It takes a timeout parameter.
    • Extremely important, and put to very good use by libuv's event loop!

[Figure: timing comparison between select and epoll]
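
From JavaScript you never call epoll directly, but you can watch what it buys you: one Node process (a single JS thread, no worker threads involved) serving many sockets at once, because libuv registers every socket's FD with epoll (kqueue on macOS, IOCP on Windows) and simply waits for readiness events. A self-contained sketch, with file and variable names of my own choosing:

// one-thread-many-sockets.js
import net from "net";

// An echo server: every connection is handled on the main thread
const server = net.createServer((socket) => {
  socket.on("data", (chunk) => socket.write(chunk));
});

server.listen(0, () => {
  const { port } = server.address();
  let replies = 0;
  // Open 100 client sockets against our own server
  for (let i = 0; i < 100; i++) {
    const client = net.connect(port, () => client.write(`ping ${i}`));
    client.on("data", () => {
      client.end();
      if (++replies === 100) {
        console.log("100 sockets served by a single thread");
        server.close();
      }
    });
  }
});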

io_uring

This one showed up to turn the whole table over.

While epoll greatly improved the performance of tracking FDs, io_uring rethinks the very nature of I/O operations.

And honestly, after understanding how it works, I kept asking myself how nobody had thought of it before!

Recapping:

  • select: receives a list of FDs, stores them sequentially (like an array), and checks them one by one (O(n) complexity) to see which had a change or activity.
  • epoll: receives a list of FDs, stores them in a self-balancing tree, does not check them one by one, is more efficient, and does the same job as select with O(1) complexity.

Historically, the process was left in charge of iterating over the returned FDs to find out which had finished and which had not.

  • io_uring: Excuse me? Return a list? Keep polling? Have you people never heard of queues?

It works with two main queues in the form of rings (hence the name io_uring):

  • 1 for submitting tasks
  • 1 for completed tasks

Simple, right?

When the process starts an I/O operation, it enqueues the operation using the io_uring structure.

Then, instead of calling select or epoll and iterating over the returned FDs, the process can opt to be notified whenever an I/O operation completes.

Polling? No. Queues!
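
To make the shape of that concrete, here is a toy model in plain JavaScript. To be clear, this is not the real kernel interface (the real rings live in memory shared between the process and the kernel); the names and structure are mine, and the point is only the pattern: submit work on one ring, harvest results from another, no per-FD polling.

// ring-toy.js: a toy model of io_uring's two rings
const submissionQueue = [];
const completionQueue = [];

// Process side: enqueue operations without waiting for them
submissionQueue.push({ op: "read", file: "a.txt" });
submissionQueue.push({ op: "write", file: "b.txt", data: "hi" });

// "Kernel" side: drain submissions, do the work, post completions
while (submissionQueue.length > 0) {
  const entry = submissionQueue.shift();
  completionQueue.push({ ...entry, status: "done" });
}

// Process side: harvest everything that finished, in one pass
for (const done of completionQueue) {
  console.log(`${done.op} on ${done.file}: ${done.status}`);
}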

Conclusion

With all this, I now know exactly the path Node takes to perform an asynchronous operation.

If it is blocking:

  • The asynchronous operation is executed through libuv.
  • It is handed to one of libuv's worker threads.
  • That worker thread stays blocked, waiting for the operation to finish.
  • When it finishes, the thread pushes the result into the event loop's macrotask queue.
  • The callback executes on the main thread.

If it is non-blocking:

  • The asynchronous operation is executed through libuv.
  • libuv issues a non-blocking I/O syscall.
  • It polls the FDs until they resolve (epoll).
  • From version 20.3.0 onward, Node can use io_uring.
    • The submission/completed-operations queue approach.
  • When a completion event arrives:
    • libuv takes care of executing the callback on the main thread.

GA4: Leverage the Power of Custom Reports to Enhance Your Reporting
https://prodsens.live/2023/08/31/ga4-leverage-the-power-of-custom-reports-to-enhance-your-reporting/

Google Analytics 4 (GA4) is officially here. If you’re not familiar yet, GA4 is the next generation of analytics.

While it offers a bunch of new features and benefits, it can be a little daunting to learn how to use it effectively. There are numerous new reports, and finding the data you care about can take a lot of time.

In this post, I will show you a great way to leverage the power of GA4 and enhance your reporting by using custom reports.

What is the need for Custom Reports?

Don’t get me wrong, the standard reports are great. They provide a lot of data. And that’s the problem. They provide a lot of data that you might not need.

  1. The default reports don't surface the data that matters to you: The default reports in GA4 provide a good overview of everything about your website. For example, they will tell you how many new users you've acquired, or how many sessions your website had in a given timeframe. However, they may not give you the specific information you're looking for in your SEO campaigns. To find, say, how many users from specific countries arrive via the organic medium, you would have to dig through a lot of reports. With custom reports, on the other hand, this work becomes much easier.

  2. Finding key metrics and organizing data can be challenging: Every business has different goals. Some focus on acquiring new users, others on a specific channel, and some on increasing views of particular pages. Finding and tracking different metrics for different businesses is difficult in the default reports. Suppose you're an SEO consultant handling 25 different GA4 accounts and you need to prepare reports that highlight the key metrics you're tracking. Gathering that data from default reports would take you dozens of hours and a lot of brain power.

  3. Presenting this data to stakeholders and ensuring your SEO suggestions are implemented can be tricky: The default reports are designed to show a general overview of the data, so they won't provide the specific figures your stakeholders are interested in. They also contain a ton of data, which makes them hard to understand. Stakeholders must understand the data, because if they don't feel confident about it, it can be really hard to get your SEO recommendations implemented.

A great way to get the most out of your analytics data is to make custom detailed reports.

What are Custom Reports?

Custom detailed reports are like default reports, except you make them using a template or entirely from scratch. Here, you can add any dimension or metric, filters, and more.

Making custom detailed reports is a great way to get deeper insights into your analytics data, and they can help you make informed decisions, which you can learn more about in my new GA4 course.

How can Custom Reports help you get the data you care about?

Custom detailed reports can help you be more productive by organizing all your important metrics in one place. This means that if you handle more than one GA4 account, it will help you analyze and understand data better in less time.

  1. Helps you focus on important metrics: GA4 allows you to create custom reports that help you focus on metrics you care about. You can add your preferred dimensions, metrics, filters, and charts. These will help you to customize the report according to your needs. So, for example, if your business is focusing on getting organic traffic from a specific country, then you can make a report that will show you this data. This will save you time, and you can focus on the metrics you care about.

  2. Get an accurate picture of your campaigns: By making custom reports, you can get granular data for a single or a group of pages. By tracking specific metrics and dimensions, you can identify trends and data that you might not get otherwise. This will help you understand what’s working and what’s not. This would mean that you can focus on making your campaigns better.

  3. Takes less time to set up and access compared to standard reports: If you were to get the specific data from the standard report, it would take you a lot of time. Making custom reports can take a few minutes to a few hours, depending on how many reports you want to make. And once your reports are made, you can publish them, and then you can access them from the sidebar. This will allow you to access important metrics quickly.

  4. Make better decisions with the data: With access to custom reports, you can see which aspects of your campaign are performing best and which aren't, which pages get the most engagement, and which pages get the most views. Having this data means you are making data-based decisions rather than shooting in the dark. It will help your campaigns perform well and let you fix the issues you encounter.

  5. Feel confident in getting your recommendations implemented: When you make custom reports, you only include a few metrics. This will make your stakeholders understand the data better. And as they can understand what needs to improve, you can feel confident that your SEO recommendations are getting implemented.

Steps to create custom reports

Here’s how you can create a custom report:

  1. From the left menu, click Reports

  2. Navigate to Library, and click it (Note that you should be an editor or administrator to access the library option)

  3. In the reports section, click on “Create new Report”

  4. Click “Create Detail Report”

  5. Here, you have two options. You can either create a report entirely from scratch or use any template.

  6. Add your desired metrics, dimensions, filters, and click save

For example, suppose your SEO campaign is focused on generating organic traffic, and your goal is to find out which countries generate the most of it.

You can do this by creating a report from the demographics template. When you open the demographics detail template, you will see a lot of data, much of which is optional.

You can remove the extra dimensions and metrics that are not important.

For this particular example, we want to keep the dimension as "country" and the metric as "total users." On its own, this would include every kind of traffic, organic and referral alike, so we will add a filter that restricts the session medium to organic.

This will show us only the total users coming from different countries via the organic medium.

But what if you want to know what those organic websites are? What if you want to know whether your traffic is coming from Google, Bing, or any other organic medium?

The second dimension can help you here. Click the little “+” icon on the right of the first dimension, “Country.”

[Screenshot: the "+" icon for adding a second dimension in GA4]

Select traffic source and then select session source.

This will give you specific data about the total users coming from:

  1. Different countries

  2. By a particular organic medium

[Screenshot: report showing total users by country and session source]

You can add additional metrics to understand the data better.

[Screenshot: the report's line chart and bar chart views]

As you can see, two chart types are available: a line chart and a bar chart. Choose whichever suits your needs.

You can also create summary cards according to the report so that you can see them in the overview reports.

This particular report will help you understand a specific metric. It will ensure that you and your stakeholders can make informed decisions.

When you have published a report, you can do things like:

  • Change the data range to find data over a period of time

  • Do MoM or YoY comparisons

  • Build a comparison to compare the data against other dimensions

  • Add filters to include or exclude a dimension

  • Share your reports with your teammates or stakeholders.
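
If you ever need the same slice of data outside the GA4 interface (for a dashboard or a recurring client report, say), the Google Analytics Data API can reproduce this custom report programmatically. Below is a sketch, not a drop-in solution: it assumes Node with the official @google-analytics/data client installed, credentials already configured, an ES module so top-level await works, and a placeholder property ID you would replace with your own. The field names (country, sessionSource, totalUsers, sessionMedium) are the API's counterparts of the dimensions, metric, and filter used in the walkthrough above.

// ga4-report.js
import { BetaAnalyticsDataClient } from "@google-analytics/data";

const client = new BetaAnalyticsDataClient();

const [response] = await client.runReport({
  property: "properties/YOUR_PROPERTY_ID", // placeholder
  dateRanges: [{ startDate: "28daysAgo", endDate: "today" }],
  dimensions: [{ name: "country" }, { name: "sessionSource" }],
  metrics: [{ name: "totalUsers" }],
  // Same filter as in the UI walkthrough: organic sessions only
  dimensionFilter: {
    filter: {
      fieldName: "sessionMedium",
      stringFilter: { value: "organic" },
    },
  },
});

for (const row of response.rows ?? []) {
  const [country, source] = row.dimensionValues.map((d) => d.value);
  console.log(country, source, row.metricValues[0].value);
}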

How to add metrics and dimensions to Custom Reports?

If you’re working with a template, then you will have predefined metrics and dimensions present in the report.

In every custom report, you can access the dimensions and metrics from here:

[Screenshot: the dimensions and metrics panel in the report customizer]

For example, the above image shows the demographics detail template. To check the dimensions, we will click on ’Dimensions’ and we will then be able to see all the default dimensions, which will look like this:

[Screenshot: the list of default dimensions in a GA4 custom report]

You have the freedom to choose any dimension and metric you want. When you're making custom reports, you won't need every dimension and metric; it depends on your goals and what you actually want the report to show. You can remove and add any dimension from the "Add dimension" dropdown.

If you want to make any dimension a default, then you can click the three dots to the right and choose “Set as default”. This will make it the default dimension in the report.

[Screenshot: setting a dimension as the default in GA4]

In the above example, we only want the ‘Country’ as our dimension. So you can remove all other dimensions by clicking the three dots and selecting “Remove”.

[Screenshot: removing a dimension via the three-dot menu]

Similarly, when you click on metrics, you will get all the default metrics like:

[Screenshot: the list of default metrics in GA4]

As you saw above, we want to choose ‘total users’ as the main metric, but we don’t see it in the default metrics list. To find the total users, you can click on “Add metric,” and from there, you can select total users and add it to the list.

You'll see that there's a small arrow beside the "Users" metric. This means it is the primary metric, and, by default, it sorts the data from highest to lowest. If you want to make another metric the default, just click on that metric and it will become the default one. Click it again and it will sort from lowest to highest.

You don’t want to show every single metric in your custom report, so you can remove the ones you don’t want by clicking the cross on the right of the metric.

Standard reports in GA4 are designed to show general data. Finding and accessing important metrics can take time, especially when you're handling a lot of projects. And getting your SEO recommendations implemented is never guaranteed with standard reports, since it can be hard for executives and stakeholders to interpret the data correctly.

That’s where custom reports come in. You can add all of your important dimensions, metrics, filters, and more to get the data you care about. This will aid you in making informed decisions, and stakeholders will be also able to understand the data better, as it will be organized and presented in a better way.
