The Great Debate: What You Need to Know about AI in Sales

In this episode of the SaaS Revolution Show, we’re live from the SaaStock USA scale stage where Jake Dunlap (CEO, Skaled) and Kevin “KD” Dorsey (Sales Leadership Accelerator & Consultant) go head to head in their session on ‘The Great Debate: What You Need to Know about AI in Sales’.

Ready for a debate? KD and Jake share what you NEED to know when it comes to AI and sales. The future of sales is human, powered by AI, and these two renowned sales experts will arm you with the knowledge and skills you need to thrive in this ever-evolving era. Through the lens of friendship, leadership, and sales enablement, the two debate it out.

Listen to the full episode, watch the video below and subscribe to the SaaS Revolution Show podcast today.

Watch now, or listen to the audio-only version below:


 

Listen to the audio now:


If you want similar tips and are looking to achieve success all year round, check out the SaaStock Founder Membership:

A private community of ambitious SaaS founders scaling to $10MM ARR. Get a support network of peers, connect with like-minded founders around the globe, and learn proven strategies from industry experts. Apply now to scale up your SaaS.


 

Want to join the pioneers at the forefront of The SaaS revolution? Subscribe to our newsletter today to get exclusive content, receive actionable value-based insights and create your Rocketship SaaS.

Plus – follow us on social! Check out our profiles on LinkedIn, X, Facebook, YouTube, Instagram, and TikTok.

And if you can’t wait another week for our next podcast, listen to our previous two here:

The Future of Construction Labor: Embracing Automation and AI

The construction industry is on the brink of a transformative era, driven by the rapid advancements in automation and Artificial Intelligence (AI). Traditionally notorious for its labor-intensive processes and susceptibility to delays and errors, construction is now being reimagined through the lens of technological innovation. Let’s dive into how automation and AI are reshaping construction, enhancing efficiency, safety, and sustainability at unprecedented scales.

The Automation Revolution

Robotics on the Rise

Robotic systems are no longer confined to manufacturing plants; they are breaking ground in construction sites. Advanced robotics can perform a myriad of tasks such as bricklaying, welding, and even 3D printing of entire structures. Take the example of SAM100 (Semi-Automated Mason), a bricklaying robot that can lay up to 3,000 bricks a day—dramatically outpacing human capabilities while maintaining superior precision.

Autonomous Vehicles

Construction sites are abuzz with autonomous equipment and vehicles. Autonomous haul trucks, excavators, and bulldozers are now a common sight, seamlessly navigating through the intricate landscapes of construction zones. Equipped with GPS, LiDAR, and advanced sensing technologies, these machines ensure optimal efficiency and safety, reducing human errors and operational costs.

Prefabrication and Modular Construction

Automation extends beyond the onsite hustle. Modular construction and prefabrication are gaining traction, where building components are produced in a controlled factory environment and then assembled onsite. This method significantly cuts down waste, shortens project timelines, and ensures higher quality control. Companies like Katerra are leading this change, proving that entire homes can be assembled in mere days!

The AI Infusion

Predictive Analytics and Project Management

AI-powered predictive analytics are game-changers in project management. By analyzing vast datasets from past projects, AI can forecast future trends, identify potential risks, and optimize scheduling. Systems like BuildOS offer real-time project insights, allowing managers to make data-driven decisions, reduce downtime, and stay within budget.

Design Optimization

AI isn’t just streamlining construction processes; it’s revolutionizing the way we design buildings. Generative design algorithms, like the generative design tools in Autodesk Revit, enable architects and engineers to input design goals and constraints, with the system generating a multitude of design alternatives. This technology not only fosters creativity but also helps ensure the final design is cost-effective and sustainable.

Enhanced Safety Protocols

Safety is paramount in construction, and AI is making strides in this area too. Wearable sensors, paired with AI analytics, can monitor workers’ health metrics and environmental conditions in real time. Drones equipped with AI can conduct site inspections, identifying potential hazards before they become problematic. The Smartvid.io platform, for instance, uses AI-powered analysis of video footage to flag safety violations and improve site safety protocols.

The Road Ahead: Challenges and Opportunities

Skill Development and Workforce Evolution

As automation and AI become integral to construction, the demand for tech-savvy professionals will surge. This shift presents a unique challenge and opportunity: reskilling the existing workforce to manage, operate, and maintain advanced machinery and AI systems. Training programs focusing on these new technologies will be critical in ensuring a smooth transition.

Ethical and Economic Considerations

While the benefits are evident, the rise of automation and AI also brings ethical and economic questions to the forefront. There’s a prevailing concern about job displacement; however, history shows us that technological progress often creates new roles even as it renders others obsolete. A balanced approach, keeping human workers at the center of this transformation, is essential.

Sustainable Construction

Automation and AI are also at the heart of sustainable construction practices. AI can optimize the use of materials, reducing waste and promoting recycling. Robotics and autonomous systems enable precise execution, minimizing environmental impact. Green construction is no longer a buzzword—it’s becoming a reality, thanks to these tech advancements.

Conclusion

The future of construction labor is not just about replacing humans with machines; it’s about augmenting human capabilities with cutting-edge technology. Automation and AI are driving this change, promising a future where construction is smarter, safer, and more sustainable. As we embrace these innovations, the possibilities are endless, and the skyline of tomorrow will be a testament to this exciting evolution. Let’s build the future!

Feel free to share your thoughts and insights in the comments below. How do you see automation and AI shaping the construction industry in your region? What challenges and opportunities lie ahead? The conversation is just getting started!

Dev: Automation


An Automation Developer is a professional responsible for designing, developing, and implementing automated solutions to streamline processes, increase efficiency, and reduce manual intervention across various domains such as software development, testing, infrastructure management, and business operations. Here’s a detailed description of the role:

  1. Understanding of Automation Concepts:

    • Automation Developers possess a strong understanding of automation principles, methodologies, and best practices.
    • They are familiar with automation frameworks, tools, and technologies used for automating repetitive tasks, workflows, and processes.
  2. Programming and Scripting Skills:

    • Automation Developers are proficient in programming languages such as Python, Java, C#, and JavaScript, as well as shell and scripting languages like Bash and PowerShell.
    • They use programming and scripting languages to write automation scripts, code automation workflows, and develop custom automation solutions tailored to specific requirements.
  3. Automation Frameworks and Tools:

    • Automation Developers have expertise in using automation frameworks and tools such as Selenium, Appium, Robot Framework, Puppet, Chef, Ansible, Jenkins, Travis CI, and GitLab CI/CD.
    • They leverage automation frameworks and tools to build, deploy, and manage automated tests, deployments, configurations, and infrastructure as code (IaC) processes.
  4. Continuous Integration and Continuous Deployment (CI/CD):

    • Automation Developers implement CI/CD pipelines and workflows to automate the build, test, and deployment processes of software applications and infrastructure changes.
    • They integrate automated testing, code analysis, code quality checks, and deployment automation into CI/CD pipelines to achieve faster and more reliable software delivery.
  5. Test Automation:

    • Automation Developers specialize in test automation by creating automated test scripts, test suites, and test frameworks for functional testing, regression testing, performance testing, and load testing.
    • They use test automation tools and libraries to automate the execution of test cases, validate software functionality, and detect defects early in the development lifecycle (a minimal example follows this list).
  6. Infrastructure Automation:

    • Automation Developers automate infrastructure provisioning, configuration, deployment, and management using infrastructure as code (IaC) practices.
    • They define infrastructure components, environments, and configurations as code using tools like Terraform, CloudFormation, and Azure Resource Manager (ARM) templates for automated infrastructure deployment and scaling.
  7. Process Automation:

    • Automation Developers automate business processes, workflows, and tasks using robotic process automation (RPA) tools, workflow automation platforms, and business process management (BPM) software.
    • They identify repetitive manual tasks, analyze process dependencies, and design automated solutions to optimize resource utilization, reduce errors, and improve productivity.
  8. Monitoring and Orchestration:

    • Automation Developers implement automated monitoring, alerting, and orchestration solutions to manage and control automated processes, systems, and workflows.
    • They integrate monitoring tools, event-driven automation, and orchestration engines to monitor system health, trigger automated responses, and ensure system reliability and performance.
  9. Security and Compliance Automation:

    • Automation Developers incorporate security and compliance checks into automated workflows and processes to enforce security policies, standards, and regulations.
    • They automate security assessments, vulnerability scanning, access controls, and compliance audits using security automation tools and scripting techniques to mitigate risks and ensure regulatory compliance.
  10. Collaboration and Communication:

    • Automation Developers collaborate with cross-functional teams, including developers, testers, operations engineers, and business stakeholders, to identify automation opportunities, gather requirements, and implement automation solutions.
    • They communicate effectively, document automation workflows, provide training and support, and promote knowledge sharing to ensure successful adoption and utilization of automation capabilities within the organization.
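
As a small, hypothetical illustration of the test-automation work described in point 5, here is the kind of API smoke test an Automation Developer might wire into a CI/CD pipeline. The endpoint is a public demo service chosen purely for illustration, not a real project dependency.

import requests

BASE_URL = "https://httpbin.org"  # assumed demo endpoint, used purely for illustration

def test_service_responds_ok():
    # pytest collects this function automatically; a CI/CD pipeline would run it on every build.
    response = requests.get(f"{BASE_URL}/status/200", timeout=10)
    assert response.status_code == 200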

In summary, an Automation Developer plays a crucial role in driving digital transformation, improving operational efficiency, and accelerating innovation by leveraging automation technologies to automate processes, tasks, and workflows across software development, testing, infrastructure management, and business operations domains. By combining technical expertise, problem-solving skills, and domain knowledge, they empower organizations to achieve agility, scalability, and competitiveness in today’s dynamic and fast-paced digital landscape.

How Deutsche Telekom MMS optimizes Ansible Playbooks with Steampunk Spotter


Managing Ansible code quality across multiple teams and projects can be challenging. We talked to Andreas Hering, System Engineer at Deutsche Telekom MMS, who shared how he and his team handle the complexities of managing diverse Ansible environments with the help of Steampunk Spotter. They not only achieved significant time savings with Spotter’s rewrite feature, but also experienced a 2-4x speedup in Ansible Playbook improvement, upgrades, and maintenance compared to manual methods.

In this blog post, we’ll delve deeper into Deutsche Telekom MMS’s goals, implementation process, results achieved so far, and valuable lessons learned along the way.

Challenges of managing multiple Ansible versions

At Deutsche Telekom MMS, many teams used Ansible to automate customers’ flows, which means that different teams used various versions of Ansible. Even though they had tools like Ansible Lint and Renovate to check their code and update Ansible, it became hard to keep their code clean and avoid duplication of roles and collections.

Where they wanted to go

“Ideally we wanted to update and upgrade all our Ansible code across all projects to the latest version,” explains Andreas.

However, with multiple customers and repositories managed by different team members, updating the code became a significant challenge.

Their goals were multi-fold:

  • enhance code quality so it is easily understandable by all colleagues in the team,
  • improve security by discouraging specific modules,
  • align with industry best practices,
  • enhance the quality of their open-source projects.

Setting up Steampunk Spotter

At Deutsche Telekom, they successfully integrated Steampunk Spotter just over 4 months ago. “When we first tested Spotter out on our code, we realized we had quite a bit of work ahead of us. For example, in one project, the average number of errors per scan and the total number of detected errors were very high, even though we already had some mechanisms like linting in place,” says Andreas.


Initially, they needed to set up an efficient workflow. Since the team primarily uses VS Code, they decided to create a workflow using the Spotter extension for VS Code. Given their multiple customers, they wanted to distinguish the errors and track the progress of each codebase. To achieve this, they created multiple projects in Spotter. “We created a configuration file with a project ID for each customer’s project. This setup worked excellently with Spotter, as it automatically searches for the config file and uses the project ID to perform scans.”

However, they also needed a solution for a command-line interface (CLI). They developed a script that facilitates scanning from the CLI by simply typing spots followed by the path to scan, along with any additional parameters normally used with Spotter. The script requires the Spotter token and endpoint, which are exported as usual. The team set an alias for spots and defined variables to extract the project ID from the VS Code config file in the repository. If no project ID is found, an error is thrown; otherwise, a Spotter scan is performed with the specified parameters.

This script is designed to be used in a bash environment, such as Linux or Windows Subsystem for Linux (WSL). “We didn’t create a version for PowerShell or Windows command line, but users are welcome to adapt it.” In Linux, you can add it to your .bashrc file to source it at login, allowing you to use the function automatically.


Optimizing code with Spotter’s powerful features

With the help of Spotter, the Deutsche Telekom MMS team achieved significant progress in a short period. They elevated their playbooks to modern standards, fine-tuned sections of the code that were previously neglected due to the lack of time and incorporated multiple best practices across more than 10 projects.

Andreas and the team tackled common errors, such as missing fully qualified collection names (FQCNs) and missing requirements files. Spotter also identified deprecated code usage and suggested areas where implementing loops could enhance efficiency. Additionally, they optimized the use of Ansible’s copy and template modules by explicitly setting modes, a frequently encountered issue. Their efforts extended beyond internal projects – they made substantial contributions to the open-source Nomad console with a large merge request facilitated by Spotter. They also used Spotter to improve their internal open-source projects.

The Deutsche Telekom MMS team was especially satisfied with the Spotter’s rewriting feature, which they used extensively, especially in the beginning. This feature helped them easily increase code quality, significantly reducing the workload and saving a lot of time. “The total number of rewrites at the beginning was quite high, but I think this is a good sign because it is automatically done by Spotter. We managed to lower the total number of detected errors in the end. This is a valuable feature for us, and it saved us a lot of time,” explains Andreas.


Furthermore, Spotter fosters continuous code improvement. By scanning new code, it aligns it with the latest best practices, ensuring your codebase constantly evolves.

Throughout their journey, our team provided ongoing support and made every effort to ensure a seamless and enjoyable user experience. “Working with you guys was good. We requested one or two features and created bug reports, which you fixed quite fast. You were always open to help,” explains Andreas.

Spotter: Enhancing code quality and reducing stress for developers

Andreas highlighted that Spotter significantly enhances the quality of Ansible Playbooks, even when existing mechanisms are in place. Its rewriting feature saves a considerable amount of time, being 2-4 times faster than manual efforts. Spotter simplifies upgrades by checking for deprecations and offering guidance on necessary changes. It helps developers write state-of-the-art code, and although the ROI is not directly trackable, it has notably reduced stress for engineers and allowed them to focus on more enjoyable tasks 😉

“Spotter shines when it comes to writing new playbooks, following best practices and fixing errors in existing playbooks automatically. That’s a great feature and I would definitely recommend Spotter!” concludes Andreas.

Take your Ansible automation to the next level

If you want to get more details and information about Deutsche Telekom MMS’s experience, you can check out our free on-demand webinar: Optimizing Ansible Playbooks with Steampunk Spotter: Deutsche Telekom MMS’s Blueprint for Effective Automation.

And if you want to see how Spotter can optimize YOUR automation workflows, we’d be more than happy to schedule a personalized demo tailored to your specific needs. Book a demo.

You can also try Spotter in your own infrastructure, without risk. Book your test now and experience premium features, dedicated support, and comprehensive report, highlighting time and cost savings for your enterprise – all for free.

And the nominees for “Best Cypress Helper” are: Utility Function, Custom Command, Custom Query, Task, and External Plugin


And the Oscar goes to…

ACT 1: EXPOSITION

 

On numerous occasions, colleagues have come to me with a question that seems to resonate among many Cypress users: Which approach is best for reusing actions or assertions when writing tests? Should they opt for a JavaScript Utility Function, a Custom Command, perhaps a Custom Query, or even one of the so-called Tasks? What about an External Plugin?

This query isn’t unique to my circle; it’s a topic that surfaces every once in a while in the Cypress.io Discord community. The myriad of methods available in Cypress can be overwhelming, but this versatility is also what makes the tool so powerful.

 

If you’re reading this blog post in the hope of finding a definitive answer as to which Cypress helper deserves the “Oscar,” then this might be the point where you choose to stop reading and check back in my next entry. After all, when it comes to movie tastes, nothing is written in stone.

However, if you’re open to exploring some recommendations and understanding when one method might be more advantageous than another, then I believe you are in the right place.

I had the idea that presenting a sneak peek of each Cypress Helper on a ‘single screen’ might just help you decide which one to ‘watch’ as a full feature film, based on your mood or needs on any given day.

My hope is that by the end of this post, you will have a stronger set of strategies to enhance your Cypress toolkit for everyday use. So get the popcorn ready and enjoy the previews. 🍿

ACT 2: CONFRONTATION

 

In Cypress.io, the decision to use a JavaScript Utility Function, Custom Command, Custom Query, Task or even an External Plugin should depend on the specific needs of your test suite and the scope of functionality you are aiming to achieve.

So… getting to the point, in what situations can each of them lend you a better hand and save you a lot of work?

Let’s look at them one by one.

 

JavaScript Utility Function

 

 

  • Use JavaScript Utility Functions for simple, synchronous operations that don’t need to interact with the Cypress command chain.

  • Ideal for data transformations, calculations, or any generic JavaScript functions.

  • You could define them directly in your test spec for cases where the operation is very specific to the scope of those tests, or wrap several of them in a helper file in the cypress/support folder if you plan to use them across multiple test specs.

There is nothing preventing you from returning a Cypress chainable object from the call to JavaScript Utility Functions, and in certain cases, it might even be quite convenient.

However, given that the majority of JavaScript code is synchronous, and considering that the nature of Cypress commands is asynchronous and they get queued for execution at a later time, you might want to consider using a Custom Command instead of a JavaScript Utility Function when returning a Cypress chainable object.

 

Custom Command

 

 

This is when things start to get interesting…

  • Use Custom Commands to create reusable sets of Cypress commands that can be called as a single command.

  • They are asynchronous, and they can return a Cypress chainable object, over which you can run assertions.

  • A Custom Command can: start a new Cypress chain (called a parent command), receive a previous subject and continue an existing chain (called a child command), or either start a chain or use an existing chain (called a dual command).

  • Added via Cypress.Commands.add() and becomes part of the Cypress chainable interface. You can also overwrite existing commands using Cypress.Commands.overwrite().

  • They are defined in the cypress/support/commands.js file and are extremely helpful for actions performed frequently in your tests, like custom login procedures or form submissions.

  • They can be reused across multiple test specs in your Cypress project.

  • BE AWARE! Custom Commands are executed once and do not have built-in retry-ability. If you want your method to have retry-ability, it is better to use a Custom Query.

If you would like to dig deeper in the intricacies of Custom Commands you can visit the Cypress documentation Custom Commands, Building Cypress Commands, and Custom Cypress Command Examples.

 

Custom Query

 

 

  • Custom Queries were a special ‘feature film’ introduced in the Cypress series at the end of 2022, debuting in version 12.

  • Use Custom Queries to query the state of your application, for instance, to find elements based on custom logic or conditions not covered by Cypress built-in queries. Examples of built-in queries include get(), find(), filter(), url(), and window().

  • They can be particularly useful when integrating with UI libraries or frameworks that require specific selectors or patterns to interact with their components.

  • Queries are synchronous and can return a Cypress chainable object, over which you can also run assertions.

  • Custom Queries are retry-able, meaning they will continuously attempt to retrieve whatever you have requested until they succeed or a timeout is reached. It is important that the Custom Query callback function does not change the state of your application.

  • New queries are added via Cypress.Commands.addQuery(), but you can overwrite an existing query using Cypress.Commands.overwriteQuery().

  • They are defined in the cypress/support/commands.js file and are useful when you can encapsulate complex or repeated DOM queries that you will reuse across your test framework.

  • However, for repeatable behavior, it is often more efficient to write a JavaScript Utility Function rather than a Custom Query; after all, both behave “synchronously” (by default, JavaScript is synchronous).

  • TAKE CAUTION! If your method needs to be asynchronous or only to be called once, then you should write a Custom Command instead.

  • STAY ALERT! When piecing together lengthy sequences of queries, ensure that you avoid incorporating standard Cypress commands, as their inclusion will disrupt the test’s ability to retry the full chain.

For more information about Custom Queries you can visit the Cypress documentation Custom Queries and Retry-ability.

 

Task

 

 

  • Tasks are used for handling operations that need to be executed outside the browser context.

  • They run in Node.js, executed by the Cypress process, and can be invoked from your tests using the cy.task() command. This bridges the gap between the Node.js server and browser-based tests, enabling a more comprehensive testing strategy.

  • Tasks are ideal for database operations such as seeding, querying, or cleanup. They are also useful for file system interactions, such as downloading files, or any server-side feature not accessible within the browser.

  • Additionally, they are great for storing state in Node that needs to persist between spec files, running parallel tasks like making multiple HTTP requests, and for executing an external process or system command.

  • The Task event handler can return a value or a promise. Returning undefined, or a promise resolved with undefined, will cause the command to fail.

  • Tasks are typically defined in the project’s cypress.config.js file within the setupNodeEvents() function. Alternatively, you can define them in the project’s cypress/plugins/index.js file.

For more information about Tasks you can visit the Cypress documentation Tasks and Real World Example tasks.

 

External Plugin

 

 

  • Cypress External Plugins are used to extend the functionality of Cypress tests beyond the capabilities provided by the core framework.

  • These plugins are typically installed via Node Package Manager and configured within the Cypress project to provide additional capabilities tailored to specific testing needs. They help make Cypress a more powerful and versatile tool for end-to-end testing.

  • You can host your External Plugins on multiple repositories such as GitHub or Bitbucket, and distribute them publicly via the software registry NPM or internally within your organization via Nexus.

  • They are extraordinarily useful when you have common tools, commands, and assertions that will be reused across multiple Cypress frameworks.

  • There is a vast array of Cypress External Plugins available; some are very well-maintained and supported by the Cypress community, while others… are not maintained at all.

  • Therefore, be selective and critical when choosing a Cypress plugin for your application. I recommend opting for plugins that are clean, lightweight, frequently updated, and supported by credible creators. After all, each time you load a plugin in your test or framework, you are adding time to your test run.

 
Some common uses for External Plugins in Cypress include:

✔ Visual Testing (such as Applitools’ cypress-eyes)

✔ Accessibility Testing (such as Andy’s cypress-axe, which utilizes Deque’s axe-core).

✔ Reporting (such as Yousaf’s cypress-multi-reporters for generating more informative and styled test reports)

✔ API Testing (such as Filip’s cypress-plugin-api)

✔ A Toolkit of useful extra Query Commands (such as Gleb’s cypress-map)

✔ Firing native system events (such as Dmitriy’s cypress-real-events)

✔ Filtering Tests (such as @cypress/grep)

For more information about available External Plugins, you can visit the Cypress documentation on Plugins (however this list seems a little dated).

ACT3: RESOLUTION

 

Each of these Cypress Tools serves a different purpose and can be used in conjunction to create a robust and maintainable testing suite. So, in my opinion, ALL OF THEM truly deserve to share the “Oscar” for “Best Cypress Helper”. 🏆

 

JavaScript Utility Functions, Custom Commands and Custom Queries are primarily about organizing code within your tests, while Tasks and External Plugins are for interacting with the system and environment outside the browser or for extending Cypress’s capabilities.

Custom Commands can enhance the readability of your tests by tackling repetitive sets of commands at once. Similarly, Custom Queries can abstract the querying logic, making the tests easier to understand at a glance.

Remember to use Custom Commands and Custom Queries judiciously, as each addition to your testing framework increases the maintenance overhead and can potentially introduce complexity. Keep them well-documented and ensure they provide clear value over the standard set of queries provided by Cypress.

However, I have to say that Cypress External Plugins hold a special place in my 💖, and that’s why I will dedicate a full blog post to them in the future.

 

Disclaimer: Keanu Reeves has never won an Oscar, Golden Globe, or Emmy, nor even been nominated for any of these awards. Maybe in his next John Wick feature film! 🤞 😉

(Image from Marvelous Videos)

Why backend engineers should make CLIs


In this article, I am going to focus on why I think people are missing out by not making CLIs. Almost everyone I know starts with web development because it is one of the things that gives results fast, which makes us go “me best, me genius,” and that’s exactly why I am still learning about web development. But there comes a point where you just want to be a developer rather than a web developer.

It takes too much time

I wanted to be able to zip my files without using 7zip because I didn’t want to learn about their CLI, which is complicated for no reason, and because of this, I always had to open the GUI. 

Now, if I made a website, that would just be excessive: I’d be building a server with Go, writing my UI with HTMX, containerizing my application with Docker, using nginx and Let’s Encrypt to get HTTPS, and finally using docker-compose to tie it all together on my Linux server on DigitalOcean.

Or I can think about the functionality that I wanted, the kind of input I wanted to support, learning about how to read a file, and later how to walk a whole directory with Go recursively, and figure out that there is a small thing about using backslashes with the zip.Writer() function. 

In the latter case, I was focused on the thing I wanted to learn rather than all the other junk that stopped me from actually getting into the thing that I wanted to learn. 

Making an entire website with all the bells and whistles for a little feature that you want to develop and figure out is too much. Just make a CLI. 
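
To give a feel for how small such a tool can be, here is a rough sketch of the same idea in Python (the author’s version was written in Go): a tiny CLI that zips a directory recursively.

import os
import sys
import zipfile

def zip_directory(source_dir, output_path):
    # Walk the directory tree recursively and add every file to the archive.
    with zipfile.ZipFile(output_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(source_dir):
            for name in files:
                full_path = os.path.join(root, name)
                # Archive entries should use forward slashes, a path-separator detail
                # similar to the backslash quirk mentioned above with Go's zip.Writer.
                arcname = os.path.relpath(full_path, source_dir).replace(os.sep, "/")
                zf.write(full_path, arcname)

if __name__ == "__main__":
    zip_directory(sys.argv[1], sys.argv[2])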

Reusable code

Another benefit of making a CLI is that you can reuse your code. For example, you might make a CLI that takes a video and converts it into different formats, something that YouTube does to deal with the varying internet bandwidth of its users.

After making the CLI, that part of the puzzle is now demystified, which means you can use it in a project that has videos; it almost acts as a microservice. 

The Bliss of Finishing

Everyone has a lot of unfinished projects, which is completely fine, but sometimes your resume needs to have some complete projects that have the appropriate readme, which people can either install or see because that gives you credibility.

Remember, sometimes you have to finish the thing that you start. CLIs, in my perspective, make the probability of finishing higher because the problem you are going to tackle is by nature going to be small in scale. This also has the byproduct of giving you confidence, which is rather helpful, especially when you are starting. 

It is cool

My favorite thing about making a CLI is that any time you use the thing that you made and it just works, it makes everything you work for seem worth it.

My advice is that when you feel overwhelmed by the things that you feel like you have to do or just feel like programming is losing its spark, just make something. It doesn’t have to be grand; it only needs to be something that you want. It is by far the quickest way to feel the fire again, and CLIs are just a nice little middle-ground thing that you will use and will need. 

Final Thoughts

If you haven’t made a CLI yet, I would like to say just try it, or maybe start by customizing your command line to look the way you want to. You spend a lot of time there, so having ownership over the things you use is important. Having fun is also important. 

Note: I got this idea from the freeCodeCamp.org video about getting a job. It is a great video, and you should check it out if you are trying to get a job.

If anyone wants to have a conversation about tech in general or talk about some of the cool CLI they’ve made for themselves, I’d love to talk about it. My information is on my profile.

Launching Crawlee Blog: Your Node.js resource hub for web scraping and automation


Hey, crawling masters!

I’m Saurav, Developer Community Manager at Apify, and I’m thrilled to announce that we’re launching the Crawlee blog today 🎉

We launched Crawlee, the successor to our Apify SDK, in August 2022 to make the best web scraping and automation library for Node.js developers who like to write code in JavaScript or TypeScript.

Since then, our dev community has grown exponentially. I’m proud to tell you that we have over 11,500 Stars on GitHub, over 6,000 community members on our Discord, and over 125,000 downloads monthly on npm. We’re now the most popular web scraping and automation library for Node.js developers 👏

Changes in Crawlee since the launch

Crawlee has progressively evolved with the introduction of key features to enhance web scraping and automation:

  • v3.1 added an error tracker for analyzing and summarizing failed requests.
  • The v3.3 update brought an exclude option to the enqueueLinks helper and integrated status messages. This improved usability on the Apify platform with automatic summary updates in the console UI.
  • v3.4 introduced the linkedom crawler, offering a new parsing option.
  • The v3.5 update optimized link enqueuing for efficiency.
  • v3.6 launched experimental support for a new request queue API, enabling parallel execution and improved scalability for multiple scrapers working concurrently.

All of this marked significant strides in making web scraping more efficient and robust.

Future of Crawlee!

The Crawlee team is actively developing an adaptive crawling feature to revolutionize how Crawlee interacts with and navigates through websites.

We just launched v3.8 with experimental support for the new adaptive crawler type.

Support us on GitHub.

Before I tell you about our upcoming plans for Crawlee Blog, I recommend you check out Crawlee if you haven’t already.

We are open-source. You can see our source code here. If you like Crawlee, then please don’t forget to give us a ⭐ on GitHub.


Crawlee Blog and upcoming plans!

The first step to achieving this goal is to reach out to the broader developer community through our content.

The Crawlee blog aims to be the best informational hub for Node.js developers interested in web scraping and automation.

What to expect:

  • How-to-tutorials on making web crawlers, scrapers, and automation applications using Crawlee.
  • Thought leadership content on web crawling.
  • Crawlee feature updates and changes.
  • Community content collaboration.

We’ll be posting content monthly for our dev community, so stay tuned!

If you have ideas on specific content topics and want to give us input, please join our Discord community and tag me with your ideas.

Also, we encourage collaboration with the community, so if you have some interesting pieces of content related to Crawlee, let us know in Discord, and we’ll feature them on our blog. 😀

In the meantime, you might want to check out this article on Crawlee data storage types on the Apify Blog.

Tracking Crypto Prices on Autopilot: GitHub Codespaces & Actions Guide


Want to stay on top of your favorite crypto’s price, but tired of manual checks? This guide empowers you to build an automated Cryptocurrency Price Tracker with GitHub Codespaces, a cloud-based development environment, and GitHub Actions, for commit automation. Let’s dive in!

Step 1: Secure Your Crypto Vault (Repository)

  1. Head to GitHub and create a new private repository (e.g., “Cryptocurrency-Price-Tracker”).
  2. Name it wisely, reflecting your crypto passion!

Step 2: Enter the Codespace Arena

  1. Open your new repository and click “Code” -> “Codespaces” -> “Create codespace on main”.
  2. This unlocks your cloud-based development playground. Time to code!

Step 3: Craft Your Price-Fetching Spell (Script)

  1. In your codespace, conjure a tracker.py file.
  2. Let’s use Python and CoinGecko API to extract the desired crypto’s price:
import requests

def get_crypto_price(symbol):
    # CoinGecko's simple/price endpoint expects the coin id (e.g. 'bitcoin'), not the ticker symbol.
    url = f"https://api.coingecko.com/api/v3/simple/price?ids={symbol}&vs_currencies=usd"
    response = requests.get(url)
    data = response.json()
    return data[symbol]['usd']

price = get_crypto_price('bitcoin')
print(price)

# Write the latest price to a file so the GitHub Actions commit step below has a change to push.
# (The file name is an illustrative choice; any tracked file works.)
with open('price.txt', 'w') as f:
    f.write(f"{price}\n")

Step 4: Automate the Price Updates (GitHub Actions)

  1. Click the “Actions” tab and select “New workflow”.
  2. Choose the “Python application” template for streamlined setup.
  3. Replace the main.yml file with this magic formula:
name: Daily Crypto Price Update

on:
  schedule:
    - cron: '0 0 * * *' # Runs daily at midnight (UTC)

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install requests

      - name: Run the script
        run: python tracker.py

      - name: Commit and push changes
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add -A
          git diff-index --quiet HEAD || git commit -m "Update crypto price"
          git push

Step 5: Unleash the Automation!

  1. Commit and push your code to the main branch.
  2. Visit the “Actions” tab and witness your workflow springing to life, updating prices daily!

Remember:

  • Adjust the script for your desired cryptocurrency.
  • Replace 'bitcoin' with the coin's CoinGecko id (for example, 'ethereum'), as shown in the sketch below.
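
As a small, hypothetical extension (not part of the guide above), CoinGecko's ids parameter also accepts a comma-separated list, so several coins can be fetched in one request:

import requests

def get_prices(coin_ids):
    # Fetch USD prices for several CoinGecko coin ids in a single request.
    ids = ",".join(coin_ids)
    url = f"https://api.coingecko.com/api/v3/simple/price?ids={ids}&vs_currencies=usd"
    return requests.get(url).json()

print(get_prices(["bitcoin", "ethereum"]))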

Now you have a self-updating Cryptocurrency Price Tracker, powered by the magic of GitHub! Feel free to customize and explore further!

P.S.: A simple project to wish you all happy coding!

How to mass import YouTube videos into a Reddit subreddit [Python]


We’re introducing “py-youtube-to-subreddit,” a Python tool available on GitHub. This tool was developed for importing YouTube channel and playlist content efficiently to specific Reddit subreddits.

Key Features:

  1. Individual Video Iteration: Loop through imported videos and decide whether to skip or publish each one.
  2. Sorting Options: Organize videos by date, view count, or likes.
  3. Enhanced Commenting Feature: Automatically post the video description as the first comment. The comments are customizable using a template from the config.json file.
  4. Subreddit Post Existence Check: Verifies if the video is already posted to avoid duplicates (see the sketch after this list).
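
For a rough idea of what the link submission and duplicate check could look like with PRAW (the standard Python Reddit API wrapper), here is a hypothetical sketch; it is not the tool's actual code, and all credentials, names, and URLs are placeholders:

import praw

# Placeholder credentials; a real run needs a Reddit "script" app and account.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="youtube-to-subreddit importer (example)",
)

subreddit = reddit.subreddit("bossfightvideos")
video_url = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder video link
title = "Example Boss Fight [No Commentary]"
description = "Video description goes here."

# Skip the video if a link post with the same URL already exists in the subreddit.
already_posted = any(post.url == video_url for post in subreddit.new(limit=500))
if not already_posted:
    submission = subreddit.submit(title=title, url=video_url)
    # Post the video description as the first comment, like the tool's commenting feature.
    submission.reply(description)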

Why build this?

We have a client, MetaCast Studios, who wanted to do two things:

  1. Create a subreddit to host all their full playthrough no commentary videos.
  2. Create an additional subreddit called BossFightVideos, where they wanted to import and post videos from boss fight playlists (and invite other redditors to publish their boss fight video productions).

We were absolutely amazed at how easy it was to put this migration script together. The command-line experience we built in made for a very pleasant transfer routine and allowed us to control what was published and what was not.

All in all we transferred about 70 videos to the new /r/metacastgaming and about 25 boss fights to /r/bossfightvideos.

We did run into major issues with the moderation team because of the volume of videos we imported, which we are currently hoping to work through (see the disclaimer below).

Disclaimer: fear the hammer. It is real.

Banned by the ban hammer!

To our disappointment, Reddit was quick to ban the two subreddits we used this for, labeling the subreddits as Spam (ouch for us and our client!).

This may have been an automated ban, or it could have been a manual one. It could be because the account is new or because there is only one measly Karma point associated with it 😄. We do not know. We are currently appealing the ban because we believe the use case is honest and aspires to add value, not degrade the Reddit experience. But with powerful tools like these, we can understand Reddit as a platform being sensitive.

So please, if you use this asset, be(a)ware that either Reddit’s automated systems or moderators may not support the usage. We advise taking your migration routines slow and cautiously or even better, ask for permission ahead of time, if possible.

We’re also linking to Reddit’s Developer Terms page for study, if anyone is curious about how this tool relates to permissible usage.

If you like our content and code, please consider dropping a star on the github repo as well as following our socials (listed below) for more novel open source applications like this.

Lots of love.

Follow GBTI for more

Thanks for reading! If you enjoy our content, follow us on:

Twitter/X | GitHub | YouTube | Dev.to | Daily.dev | Hashnode | Blog / Discord

Automating Baseline Profile end-to-end on CI


A robot sitting on another robotic arm and analyzing code to select classes and methods required during app launch.
Generated With AI on Bing

A baseline profile allows you to pre-package with the APK a list of classes and methods that are required during launch or in the critical user journey (the core flow of your app). Before baseline profiles, this list either had to be generated on the device by analyzing a few app launches, or sourced from the Play Store if other devices had already figured it out and uploaded it. This list of classes and methods is then pre-compiled to machine code so that, during app launch and the critical user journey, the code is ready for the CPU to execute, thereby making the entire flow faster.

Reading further requires some context on baseline profiles, which is unfortunately out of scope for this blog. To start with, I would recommend this talk from the Android Dev Summit to get some context on baseline profiles, and then continue here.

The problem at hand

Through this blog, I want to stress the automation part of the baseline profile. As you make changes in the code, you need to update the profile so that it includes new classes and methods required either during launch or during the critical user journey.

Keeping the profile up to date is only half of the problem. You need to make sure the profile is working as expected. For that, you need to run macro-benchmark tests to verify that the profile is reducing launch time.

We will explore possible solutions to the above problems, so let’s get started without further ado!

Automating profile generation

With the baseline profile Gradle plugin you can easily update the profile, as the plugin does the heavy lifting of building the required test APKs, running the tests on the device, and copying the generated profile to the correct directory after the tests complete. Due to the scope of this blog, I won’t go into details about plugin setup. You would need a test module and the plugin configured within that test module.

Assuming your plugin setup is correct, the CI setup is fairly simple. We will focus on GitHub Actions as our CI platform; converting the setup for other CI platforms shouldn’t be difficult.

Here is the workflow that will update the profile on every pull request. After a successful run, the workflow should update the PR with the modified profile.

name: Generate Baseline Profile

on: pull_request

jobs:
  build-benchmark-apks:
    name: Generate baseline profile
    runs-on: macos-latest
    timeout-minutes: 20

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Validate Gradle Wrapper
        uses: gradle/wrapper-validation-action@v1

      - name: Set up JDK 17
        uses: actions/setup-java@v3
        with:
          distribution: 'zulu'
          java-version: 17

      - name: Install GMD image for baseline profile generation
        run: yes | "$ANDROID_HOME"/cmdline-tools/latest/bin/sdkmanager "system-images;android-33;aosp_atd;x86_64"

      - name: Accept Android licenses
        run: yes | "$ANDROID_HOME"/cmdline-tools/latest/bin/sdkmanager --licenses || true

      - name: Generate profile
        run: ./gradlew :sample:generateBaselineProfile -Pandroid.testoptions.manageddevices.emulator.gpu="swiftshader_indirect"

You can configure the baseline profile Gradle plugin to generate the profile through a Gradle managed device. The above workflow first installs a system image for the Gradle managed device, then accepts the license for the installed system image. Finally, it runs the generation step by executing the ./gradlew :sample:generateBaselineProfile command. Note the android.testoptions.manageddevices.emulator.gpu test parameter we are passing. This is required to run a Gradle managed device on a GitHub Actions macOS runner; without it, profile generation will fail with a device setup error.

If you cannot use the baseline profile Gradle plugin for some reason (it requires AGP 8.0 and above), then this can be tricky, as an additional step is required to pull the profile from the device and store it in the source directory. Chris Banes wrote a workflow for this that you can use as a reference.

Automating profile verification

When it comes to automating verification of the profile, things are a little trickier. Profile verification requires running macro-benchmark tests on physical devices, which means you need to provision a physical device, run the verification on it, and fetch the benchmark results back. For this post, we will focus on Firebase Test Lab.

Running instrumented tests on Firebase Test Lab is not challenging at all; many of you already do this as part of your CI setup, and since macro-benchmark tests are just instrumented tests, that part is fairly simple. What is challenging is downloading the benchmark JSON produced on the device and parsing it on CI to analyze the result. A rough sketch of that manual flow is shown below.
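
For context, here is roughly what doing this by hand with gcloud and gsutil looks like. The bucket name, run ID, device output directory, and APK paths are assumptions for illustration; the exact paths depend on your benchmark configuration.

# Run the macro-benchmark (an instrumented test) on a physical device in Firebase Test Lab.
gcloud firebase test android run \
  --type instrumentation \
  --app sample/build/outputs/apk/benchmark/sample-benchmark.apk \
  --test benchmark/build/outputs/apk/benchmark/benchmark-benchmark.apk \
  --device model=redfin,version=30 \
  --directories-to-pull /sdcard/Download \
  --results-bucket my-benchmark-results

# Download the benchmark JSON that FTL copied into the results bucket, then parse it on CI.
gsutil cp -r "gs://my-benchmark-results/<run-id>/" benchmark-results/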

While contributing the baseline profile CI setup to COIL (an image loading library), I ran into exactly this challenge. I explored the possible ways to automate the verification step and ultimately decided to write a Gradle plugin for it.

Auto-Benchmark (sorry for the shameless plug 😅) is a Gradle plugin that automates this step, since it is a bit involved and not straightforward. The main limitation is that the plugin only supports Firebase Test Lab (FTL). It uploads the benchmark APK and the app APK to FTL and, once the tests complete, downloads the result JSON. The JSON is then parsed and analyzed to determine the impact of the profile, measured as the percentage improvement between the no-compilation median startup time and the baseline-profile median startup time. Let's see how you can use this plugin.

The following is the plugin configuration you need in your app-level build.gradle file.

autoBenchmark {
  // A relative file path from the root to the app APK with the baseline profile
  appApkFilePath.set("/sample/build/outputs/apk/benchmark/sample-benchmark.apk")

  // A relative file path from the root to the benchmark APK
  benchmarkApkFilePath.set("/benchmark/build/outputs/apk/benchmark/benchmark-benchmark.apk")

  // A relative file path of the service account JSON used to authenticate with Firebase Test Lab
  serviceAccountJsonFilePath.set("../../.config/gcloud/application_default_credentials.json")

  // Physical device configuration map to run the benchmark on
  physicalDevices.set(mapOf(
    "model" to "redfin", "version" to "30"
  ))

  // Tolerance percentage for improvement below which verification will fail
  tolerancePercentage.set(10f)
}

The configuration takes three file paths: two relative paths to the benchmark and app APKs, and the path to the service account JSON used to authenticate with Firebase Test Lab. Next, it takes the device configuration on which you want to run the benchmark; the configuration above uses a Pixel 5 ("redfin") at API level 30. Lastly, it takes tolerancePercentage, the percentage of improvement below which you want profile verification to fail. For example, if the no-compilation median startup is 233 ms and the baseline-profile median startup is 206 ms, startup time improved by roughly 11%; with a tolerance of 10, the task completes successfully.
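
To make that pass/fail rule concrete, here is a hypothetical Kotlin sketch of the kind of comparison the plugin performs; the function, parameter names, and exact formula are illustrative, not taken from the plugin's source.

// Hypothetical sketch of the verification check (names and formula are illustrative).
fun verifyImprovement(
    noCompilationMedianMs: Double,   // e.g. 233.0, median startup with no AOT compilation
    baselineProfileMedianMs: Double, // e.g. 206.0, median startup with the baseline profile
    tolerancePercentage: Double      // e.g. 10.0, from the plugin configuration
) {
    // Percentage improvement relative to the no-compilation run.
    val improvement =
        (noCompilationMedianMs - baselineProfileMedianMs) / noCompilationMedianMs * 100
    check(improvement >= tolerancePercentage) {
        "Baseline profile improvement %.1f%% is below the tolerance of %.1f%%"
            .format(improvement, tolerancePercentage)
    }
}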

To seed the tolerance percentage, I would suggest running the benchmark against the initial profile locally several times (preferably more than five) and taking the minimum percentage improvement you observe between the no-compilation median startup time and the baseline-profile median startup time.

Seeding the tolerance percentage this way is something I came up with myself, and I know it is debatable; some of you may not consider it a proper way to set the baseline for your improvement. I would love to hear your thoughts if you have better suggestions for seeding the tolerance percentage.

With this configuration in place, verifying the profile is just a matter of issuing a single Gradle command: ./gradlew :app:runBenchmarkAndVerifyProfile. For a detailed setup of the plugin, I would suggest checking the README guide.

Now that we have seen how the plugin works, let's integrate it into the CI pipeline. Once again we will set this up on GitHub Actions, but it should be simple to port to your CI of choice.

As we saw in the configuration above, the plugin needs a service account JSON file path, so we need to set that up before running profile verification on CI. First, store the base64-encoded service account JSON as a GitHub environment variable; later in the workflow we will decode it and write it back to the CI machine's file system. To encode the service account JSON, run the following command.

base64 -i "$HOME/.config/gcloud/application_default_credentials.json" | pbcopy

Paste the encoded JSON into a GitHub environment variable named GCLOUD_KEY. To decode it and restore the file on the CI machine, run the following bash commands as a workflow step.

GCLOUD_DIR="$HOME/.config/gcloud/"  
mkdir -p "$GCLOUD_DIR"
echo "${{ vars.GCLOUD_KEY }}" | base64 --decode > "$GCLOUD_DIR/application_default_credentials.json"

The above commands decode the service account JSON from the GitHub environment variable and write it to $HOME/.config/gcloud/application_default_credentials.json.

Now let’s see the entire workflow.

name: Verify Baseline Profile

on:
  pull_request:
    branches:
      - release

jobs:
  build-benchmark-apks:
    name: Build APKs and Run profile verification
    runs-on: macos-latest
    timeout-minutes: 20

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Validate Gradle Wrapper
        uses: gradle/wrapper-validation-action@v1

      - name: Set up JDK 17
        uses: actions/setup-java@v3
        with:
          distribution: 'zulu'
          java-version: 17

      - name: Build benchmark apk
        run: ./gradlew :benchmark:assembleBenchmark

      - name: Build app apk
        run: ./gradlew :sample:assembleBenchmark

      - name: Set up authentication
        run: |
          GCLOUD_DIR="$HOME/.config/gcloud/"
          mkdir -p "$GCLOUD_DIR"
          echo "${{ vars.GCLOUD_KEY }}" | base64 --decode > "$GCLOUD_DIR/application_default_credentials.json"

      - name: Verify baseline profile
        run: ./gradlew :sample:runBenchmarkAndVerifyProfile

In the above workflow, the Set up authentication step does exactly what we saw earlier: it restores the service account JSON file on the CI machine from the GitHub environment variable. After setting up the credentials, the workflow runs the plugin's Gradle command to verify the profile.

This is all you need to detect regressions in your baseline profile setup. You can find both workflows we saw in my repo for reference.

One important thing I want to highlight: this plugin only detects a performance dip below the tolerance percentage so that CI can fail fast. You can go further and collect these benchmark results to visualise on a dashboard; monitoring macro-benchmark results over time gives you more insight into your app's performance. You can refer to this performance-samples repo guide to see how that can be done with Google Cloud Monitoring. Also, Py ⚔ published a script that Square uses to compare two benchmark runs. These are good starting points for building a dashboard around your macro-benchmark runs.

I hope this guide helps you automate the baseline profile on CI. I would appreciate your feedback on the plugin, so please do try it out and let me know what can be improved.

Until next time! ☮ ✌

Thanks to Shreyas Patil & Ben Weiss for proofreading this.

