petermsouzajr / cy-shadow-report
Streamline your team's testing workflow with qa-shadow-report, an open-source tool that automatically consolidates Cypress & Playwright test data into Google Sheets or CSV for enhanced visibility and team collaboration.

Home Page: https://www.npmjs.com/package/qa-shadow-report

License: MIT License


cy-shadow-report's Introduction

Cypress & Playwright Reporting: Seamless Integration with Google Sheets and CSV

Our package bridges Cypress and Playwright test runs with Google Sheets or CSV, streamlining test result integration and boosting team collaboration. It not only provides immediate insights into automation project health but also leads a paradigm shift in organizational methods, promoting clearer coordination and efficiency. Note: Cypress is a registered trademark of Cypress.io, Playwright is a registered trademark of Microsoft Corporation, and Google Sheets is a registered trademark of Google Inc. This application is not officially endorsed or certified by Playwright, Microsoft Corporation, Google Inc., or Cypress.io.

Table of Contents

  1. Installation

  2. Quick Start

  3. Samples

  4. Sheets Setup Guide

  5. Sheets Enhanced Configuration

  6. Github CI/CD

  7. Demo Branch

Installation

qa-shadow-report setup guide

Upon installing qa-shadow-report, you can run the command qasr-setup, which walks you through a few yes/no questions to set up the tool for your testing framework and package manager. You can exit the setup at any time by entering EXIT; you will then need to complete the setup manually by following the detailed instructions in the Cypress or Playwright sections of this guide.

This setup process is designed to tailor the installation to your specific needs, ensuring that all dependencies and configurations are correctly established for your environment.

Quick Start

Generate Reports In CSV format

  • Use the base commands with a framework argument of cypress or playwright and the optional flag --csv to run a daily report.
    • qa-shadow-report [framework] --csv
    • qa-shadow-report [framework] todays-report --csv
  • Ensure JSON data is present from the [framework] test results output; check the Prerequisites section to see a [framework] configuration.
  • A detailed summary will be downloaded into the Cypress downloads folder (cypress/downloads).
  • Monthly summary reports are not currently supported in CSV format.

Samples

Sheets Daily Report

Screenshot of Feature

Sheets Monthly Summary

Screenshot of Feature

CSV Daily Report

Screenshot of Feature

Sheets Setup Guide

Cypress

Before you begin, ensure you have the following packages and authentication in place. You can run the command qasr-setup, which walks you through a few yes/no questions to set up the tool for your testing framework and package manager:

  • Mochawesome and Mochawesome Merge: Usually installed by the setup wizard, these are recommended for Cypress test report generation: npm install --save-dev mochawesome mochawesome-merge.

  • Google Spreadsheet ID: Find this in your sheet's URL and store it in an environment variable.

  • Service Account Credentials for Google Sheets: Follow the detailed Google Service Account guide from node-google-spreadsheet to set up and safely store your credentials, then update shadowReportConfig.* (js or ts) with the path to these credentials. Use .gitignore to keep your credentials out of version control.

  • qa-shadow-report configuration file: Usually created by the setup wizard in the root of your Cypress project, named shadowReportConfig.* (js or ts).

    • teamNames: An array of identifiers representing different teams within your organization that may use or contribute to the testing process.
    • testTypes: Specifies the types of tests included in your project, such as API tests or UI tests, to help organize and filter test executions.
    • testCategories: Defines the categories of tests your project includes, such as smoke tests for quick checks or sanity tests for verifying vital features after builds.
    • googleSpreadsheetId: The unique identifier for your Google Sheets project, found within the URL of your Google Sheet. This is used to integrate and sync test result data.
    • googleKeyFilePath: The file path to your Google service account credentials, which are required to authenticate and interact with Google Sheets API.
    • testData: The file path to where your test results in JSON format are stored, typically generated by Cypress or another testing framework.
    • csvDownloadsPath: The directory path where the generated CSV files will be saved. This is useful for users who prefer to download and review test results in a CSV format.
    module.exports = {
      teamNames: [
        'unicorns',
        'robots',
      ],
      testTypes: [
        'api',
        'ui',
      ],
      testCategories: [
        'smoke',
        'sanity',
      ],
      googleSpreadsheetId: 'v544j5h4h456v6n',
      googleKeyFilePath: 'googleCredentials.json',
      testData: 'cypress/results/output.json',
      csvDownloadsPath: 'downloads'
    };
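The googleSpreadsheetId value is the long token in your sheet's URL, between /spreadsheets/d/ and /edit. As an illustrative sketch (this helper is not part of the package), you could extract it like this:

```javascript
// Illustrative helper: extract the spreadsheet ID from a Google Sheets URL.
// Example URL shape: https://docs.google.com/spreadsheets/d/<ID>/edit#gid=0
function extractSpreadsheetId(url) {
  const match = url.match(/\/spreadsheets\/d\/([a-zA-Z0-9-_]+)/);
  return match ? match[1] : null;
}

const url = 'https://docs.google.com/spreadsheets/d/v544j5h4h456v6n/edit#gid=0';
console.log(extractSpreadsheetId(url)); // -> 'v544j5h4h456v6n'
```

Storing the extracted ID in an environment variable, as recommended above, keeps it out of your committed configuration.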
    

Recommended package.json Scripts

To ensure tests and reports are processed correctly, configure your package.json similarly to the following example:

  "scripts": {
    "cypress:prerun": "rm -rf cypress/results",
    "cypress:run": "npm run cypress:prerun && cypress run --headless --reporter mochawesome --reporter-options reportDir=cypress/results,overwrite=false,html=false,json=true",
    "postcypress:run": "npm run report:merge",
    "report:merge": "mochawesome-merge cypress/results/*.json > cypress/results/output.json && npm run report:generate",
    "report:generate": "qa-shadow-report cypress",
    "cypress-test": "npm run cypress:run"
  },

In this example, running npm run cypress-test will:

  • cypress:prerun: Delete all previous test run data.
  • cypress:run: Run all Cypress tests and write each test result to a results folder in JSON format.
  • postcypress:run: Call report:merge.
  • report:merge: Merge the individual test results into one large JSON object.
  • report:generate: Generate a report based on the Cypress test results using qa-shadow-report.

Adjust these scripts as needed for your project's requirements.

Playwright

Before you begin, ensure you have the following packages and authentication in place. You can run the command qasr-setup, which walks you through a few yes/no questions to set up the tool for your testing framework and package manager:

  • Google Spreadsheet ID: Find this in your sheet's URL and store it in an environment variable.

  • Service Account Credentials for Google Sheets: Follow the detailed Google Service Account guide from node-google-spreadsheet to set up and safely store your credentials, then update shadowReportConfig.* (js or ts) with the path to these credentials. Use .gitignore to keep your credentials out of version control.

  • Playwright Configuration: In the playwright.config.js file, specify the reporter like this:

    // playwright.config.js
    reporter: [['json', { outputFile: 'test-results/output.json' }]];
  • qa-shadow-report configuration file: shadowReportConfig.* (js or ts), usually created by the setup wizard in the root of your Playwright project. You can run the command qasr-setup, which walks you through a few yes/no questions to set up the tool for your testing framework and package manager.

    • teamNames: An array of identifiers representing different teams within your organization that may use or contribute to the testing process.
    • testTypes: Specifies the types of tests included in your project, such as API tests or UI tests, to help organize and filter test executions.
    • testCategories: Defines the categories of tests your project includes, such as smoke tests for quick checks or sanity tests for verifying vital features after builds.
    • googleSpreadsheetId: The unique identifier for your Google Sheets project, found within the URL of your Google Sheet. This is used to integrate and sync test result data.
    • googleKeyFilePath: The file path to your Google service account credentials, which are required to authenticate and interact with Google Sheets API.
    • testData: The file path to where your test results in JSON format are stored, typically generated by Cypress or another testing framework.
    • csvDownloadsPath: The directory path where the generated CSV files will be saved. This is useful for users who prefer to download and review test results in a CSV format.
    module.exports = {
      teamNames: [
        'unicorns',
        'robots',
      ],
      testTypes: [
        'api',
        'ui',
      ],
      testCategories: [
        'smoke',
        'sanity',
      ],
      googleSpreadsheetId: 'v544j5h4h456v6n',
      googleKeyFilePath: 'googleCredentials.json',
      testData: 'test-results/output.json',
      csvDownloadsPath: 'downloads'
    };
    

Recommended package.json Scripts

To ensure tests and reports are processed correctly, configure your package.json similarly to the following example:

  "scripts": {
    "playwright:prerun": "rm -rf test-results",
    "playwright:run": "npm run playwright:prerun && playwright test || true",
    "report:generate": "qa-shadow-report playwright",
    "playwright-test": "npm run playwright:run && npm run report:generate"
  },

In this example, running npm run playwright-test will:

  • playwright:prerun: Delete all previous Playwright test run data by removing the test-results folder.
  • playwright:run: Run all Playwright tests, storing each result in the test-results folder in JSON format.
  • report:generate: Generate a report based on the Playwright test results using qa-shadow-report.
  • playwright-test: Combine playwright:run and report:generate to execute the entire process in one step, running the tests and then generating the report. Make sure that your qa-shadow-report command works as expected with Playwright data, and adjust any paths or arguments to fit your specific project's setup.

To Generate Reports In Sheets

All commands require that test report data is present; in this example, the report data is generated by the testing framework.
  • To run the standard global functionality

    • Run the command qa-shadow-report [framework].
    • This command processes the data from the test results and creates a detailed report.
    • A new sheet tab will be created with the current day's title, e.g. Mar 24, 2024, to which this detailed report will be written.
    • If tabs exist on the sheet for the previous month, e.g. the current month is April and sheet tabs exist for Mar 24, 2024 and Mar 25, 2024, then a monthly summary will be generated with that previous month's data, titled Summary Mar 2024.
    • The report will fail if JSON test result data is not present.
    • Duplicate sheet tabs are not allowed by default; to create a duplicate tab, use the flag --duplicate.
  • To run the daily report only

    • Run qa-shadow-report [framework] todays-report.
    • Ensure JSON data is present from framework test results output.
    • Duplicate sheet tabs are not allowed by default; to create a duplicate tab, use the flag --duplicate.
    • This command will bypass the task of generating a monthly summary.
  • To run the monthly summary report only

    • Run qa-shadow-report [framework] monthly-summary.
    • Ensure daily reports from the previous month are present, otherwise no summary will be generated.
    • Duplicate sheet tabs are not allowed by default; to create a duplicate tab, use the flag --duplicate.
    • This command will bypass the task of generating a daily report.
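The tab titles above follow a simple date pattern. As a hedged sketch (illustrative only, not the package's actual implementation), the daily and monthly titles could be produced like this:

```javascript
// Illustrative: format tab titles in the style the report uses,
// e.g. 'Mar 24, 2024' for a daily tab and 'Summary Mar 2024' for a monthly summary.
function dailyTabTitle(date) {
  return date.toLocaleDateString('en-US', {
    month: 'short',
    day: 'numeric',
    year: 'numeric',
  });
}

function monthlySummaryTitle(date) {
  const month = date.toLocaleDateString('en-US', { month: 'short' });
  return `Summary ${month} ${date.getFullYear()}`;
}

console.log(dailyTabTitle(new Date(2024, 2, 24)));       // -> 'Mar 24, 2024'
console.log(monthlySummaryTitle(new Date(2024, 2, 24))); // -> 'Summary Mar 2024'
```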

To Generate Duplicates

  • Use the base commands with the optional flag --duplicate
    • qa-shadow-report [framework] --duplicate
    • qa-shadow-report [framework] todays-report --duplicate
    • Monthly summary duplicates must be created directly, using the command qa-shadow-report [framework] monthly-summary.

Quick Command Reference

  • qa-shadow-report [framework] or qasr [framework] - Generates a monthly and daily report in sheets, if none exist.
  • qa-shadow-report [framework] todays-report - Generates today's report in sheets, if none exists.
  • qa-shadow-report [framework] monthly-summary - Generates a monthly summary in sheets, if none exist.
  • qasr-setup - Initiates the setup process for either Cypress or Playwright, npm or Yarn.
  • --csv - Outputs the test results to the cypress/downloads folder in CSV format, if none exist.
  • --duplicate - Allows duplicate daily reports to be created.
  • --help - Outputs a summary of available commands and their usage.

Sheets Enhanced Configuration

Column: Team

If you have team names or labels indicating ownership of a test or code feature, specify them to ensure visibility on the report sheet. Add them to your shadowReportConfig.* (.js or .ts) file:

module.exports = {
  teamNames: [
    'oregano',
    'spoofer',
    'juniper',
    'occaecati',
    'wilkins',
    'canonicus',
  ],
  googleSpreadsheetId: 'v544j5h4h456v6n',
  googleKeyFilePath: 'googleCredentials.json',
  testData: '[framework]/results/output.json',
  csvDownloadsPath: 'downloads'
};

The Team Name column aggregates and displays data based on the team names you define. Include the team name within square brackets in the describe block string to identify the team responsible for the feature code. For instance, [Windsor] is used as the team name in this example:

describe('[Windsor] Unit test our math functions', () => {
  context('math', () => {
    it('can add numbers [C2452][smoke]', () => {
      expect(add(1, 2)).to.eq(3)
    })

    it('can subtract numbers [C24534][smoke]', () => {
      expect(subtract(5, 12)).to.eq(-7)
    })

    it('can divide numbers [C2460]', () => {
      expect(divide(27, 9)).to.eq(3)
    })

    it('can multiply numbers [C2461]', () => {
      expect(multiply(5, 4)).to.eq(20)
    })
  })
})

This configuration allows for a more organized and comprehensive report, showcasing test ownership and facilitating team-specific analysis. If you do not specify Team Names, there will be no metrics reported regarding Teams.
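The team tag is simply a bracketed token in the describe-block title. A minimal sketch of how such a tag could be matched against your configured teamNames (illustrative helper, not the package's code):

```javascript
// Illustrative: find which configured team name, if any, appears in
// square brackets in a describe-block title.
function teamFromTitle(title, teamNames) {
  const match = title.match(/\[([^\]]+)\]/);
  if (!match) return null;
  const tag = match[1];
  return teamNames.includes(tag) ? tag : null;
}

const teamNames = ['Windsor', 'oregano', 'spoofer'];
console.log(teamFromTitle('[Windsor] Unit test our math functions', teamNames)); // -> 'Windsor'
console.log(teamFromTitle('Unit test our math functions', teamNames));           // -> null
```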

Column: Type

The Type column compiles and categorizes data based on predefined categories. To ensure visibility on the report sheet, add your test types to your shadowReportConfig.* (.js or .ts) file. If you do not specify a list of test types, the reporting software will fall back to the default list and will only compile metrics based on: ["api", "ui", "unit", "integration", "endToEnd", "performance", "security", "database", "accessibility", "mobile"].

module.exports = {
      teamNames: ['oregano'],
      testTypes: [
        'api',
        'ui',
        'unit',
        'integration',
        'endToEnd',
        'performance',
        'security',
        'database',
        'accessibility',
        'mobile',
      ],
      googleSpreadsheetId: 'v544j5h4h456v6n',
      googleKeyFilePath: 'googleCredentials.json',
      testData: '[framework]/results/output.json',
      csvDownloadsPath: 'downloads'
};

To incorporate a test type into your [framework] report, it is highly recommended to reflect the test type in your [framework] file structure. This practice enhances organizational clarity within your team. For instance, in this example, 'api' is added after the e2e directory:

[framework]/e2e/api/1-getting-started/todo.cy.js

Similarly, you can structure your files for other types, such as UI or Performance:

[framework]/ui/1-getting-started/todo.cy.js

[framework]/performance/1-getting-started/todo.cy.js

This method of file organization facilitates easy identification and categorization of tests based on their target type, thereby streamlining the reporting and analysis process.
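As a sketch of the idea (hypothetical helper, not the package's implementation), deriving the type from a file path amounts to checking which configured type appears as a path segment:

```javascript
// Illustrative: pick the first configured test type that appears
// as a directory segment in a spec file's path.
function typeFromPath(filePath, testTypes) {
  const segments = filePath.split('/');
  return testTypes.find((type) => segments.includes(type)) ?? null;
}

const testTypes = ['api', 'ui', 'performance'];
console.log(typeFromPath('cypress/e2e/api/1-getting-started/todo.cy.js', testTypes)); // -> 'api'
console.log(typeFromPath('cypress/e2e/1-getting-started/todo.cy.js', testTypes));     // -> null
```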

Column: Category

The Category column compiles data to represent the specific purpose of each test, based on predefined categories. To ensure visibility on the report sheet, add your test categories to your shadowReportConfig.* (.js or .ts) file. If you do not specify a list of categories, the reporting software will fall back to the default list and will only compile metrics based on: ["smoke", "regression", "sanity", "exploratory", "functional", "load", "stress", "usability", "compatibility", "alpha", "beta"].

module.exports = {
      teamNames: ['oregano'],
      testTypes: ['mobile'],
      testCategories: [
        'smoke',
        'regression',
        'sanity',
        'exploratory',
        'functional',
        'load',
        'stress',
        'usability',
        'compatibility',
        'alpha',
        'beta',
      ],
      googleSpreadsheetId: 'v544j5h4h456v6n',
      googleKeyFilePath: 'googleCredentials.json',
      testData: '[framework]/results/output.json',
      csvDownloadsPath: 'downloads'
    };

To indicate the purpose of a test within your [framework] suite, add the test category in square brackets at the end of the string in the it block. This annotation specifies the intended coverage of the test. For example, in this snippet, [smoke] and [usability] are used to denote test categories:

describe('[Windsor] Unit test our math functions', () => {
  context('math', () => {
    it('can add numbers [C2452][smoke]', () => {
      expect(add(1, 2)).to.eq(3)
    })

    it('can subtract numbers [C24534][smoke]', () => {
      expect(subtract(5, 12)).to.eq(-7)
    })

    it('can divide numbers [C2460] [usability]', () => {
      expect(divide(27, 9)).to.eq(3)
    })

    it('can multiply numbers [C2461]', () => {
      expect(multiply(5, 4)).to.eq(20)
    })
  })
})

This approach not only categorizes your tests effectively but also adds clarity to the specific objectives they aim to achieve, thereby enhancing the insightfulness of your test reporting.

Column: Manual Case

The Manual Case column is designed to display the Manual Case ID associated with the automated test. Within the it block string in your [framework] tests, include the Manual Case in square brackets at the end of the string. This notation specifies the Manual Case linked to each particular test. For instance, [C2452] and [C24534] are examples of manual cases used in this context. You can format identifiers using a prefix of letters (or symbols like # or -), followed by one or more numbers, as in [DEV-345], [TC-34535], and [#356363]. Make sure the identifier is enclosed within square brackets.

describe('[Windsor] Unit test our math functions', () => {
  context('math', () => {
    it('can add numbers [C2452][smoke]', () => {
      expect(add(1, 2)).to.eq(3)
    })

    it('can subtract numbers [C24534][smoke]', () => {
      expect(subtract(5, 12)).to.eq(-7)
    })

    it('can divide numbers [C2460] [usability]', () => {
      expect(divide(27, 9)).to.eq(3)
    })

    it('can multiply numbers [C2461]', () => {
      expect(multiply(5, 4)).to.eq(20)
    })
  })
})

This method ensures that each test is accurately linked to its corresponding Manual Case ID, facilitating a more detailed and organized approach to test tracking, reporting, and auditing.
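The identifier format described above — a prefix of letters (or symbols like # or -) followed by digits, inside square brackets — can be matched with a short regular expression. This is an illustrative sketch, not the package's actual parser:

```javascript
// Illustrative: extract manual case IDs like [C2452], [DEV-345], or [#356363]
// from an it-block title, ignoring category tags like [smoke] that have no digits.
function manualCaseIds(title) {
  const matches = title.matchAll(/\[([A-Za-z#-]+\d+)\]/g);
  return [...matches].map((m) => m[1]);
}

console.log(manualCaseIds('can add numbers [C2452][smoke]'));       // -> ['C2452']
console.log(manualCaseIds('fix login flow [DEV-345] [usability]')); // -> ['DEV-345']
console.log(manualCaseIds('no case id here [smoke]'));              // -> []
```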

'The Works'

When specifying your team names, test types, and test categories, your shadowReportConfig.* (.js or .ts) can look like this:

module.exports = {
      teamNames: [
        'oregano',
        'wilkins',
        'canonicus',
      ],
      testTypes: [
        'api',
        'ui',
        'accessibility',
        'mobile',
      ],
      testCategories: [
        'smoke',
        'compatibility',
        'alpha',
        'beta',
      ],
      googleSpreadsheetId: 'v544j5h4h456v6n',
      googleKeyFilePath: 'googleCredentials.json',
      testData: '[framework]/results/output.json',
      csvDownloadsPath: 'downloads'
    };

Github CI/CD

This package is best suited for automated nightly runs, enabling teams to efficiently monitor project status and collaborate on test results every morning.

Integrating Google Sheets Credentials with GitHub Actions:

For seamless integration in GitHub Actions, the Google Sheets credentials need to be configured just as they are for manual package operation. Given the length constraints of GitHub secrets, it may be necessary to compact the Google Sheets key using GPG encryption.

Steps for Secure Key Management

  1. Local Encryption of the Secret Key

    • Generate a GPG Key Pair: If not already available, generate a new GPG key pair using the command gpg --gen-key.
    • Encrypt the Secret File: For a secret file named google-key.json, encrypt it by executing gpg --output google-key.json.gpg --symmetric --cipher-algo AES256 google-key.json.
  2. Storing Encrypted Secrets in GitHub

    • Repository Storage: Include the encrypted file (google-key.json.gpg) in the repository.
    • Creating a GitHub Secret: Generate a GitHub secret named GPG_PASSPHRASE containing the passphrase used for file encryption.
  3. Decrypting the Secret in GitHub Actions

    • Workflow Modification: Incorporate steps in your GitHub Actions workflow to decrypt the secret file using the stored passphrase. The modifications should align with your project's encryption setup.

Note: A suitable GitHub Action configuration is required for this process to function correctly:

name: Nightly Cypress Test and Report

on:
  schedule:
    # Schedule to run at 00:00 UTC (You can adjust the time as needed)
    - cron: '0 0 * * *'

jobs:
  cypress-test-and-report:
    runs-on: ubuntu-latest

    steps:
    - name: Check out repository
      uses: actions/checkout@v3

    - name: Set up Node.js
      uses: actions/setup-node@v3
      with:
        node-version: '14' # Specify the Node.js version

    - name: Install GPG
      run: sudo apt-get install -y gpg

    - name: Decrypt Google Sheets Key
      run: |
        echo "${{ secrets.GPG_PASSPHRASE }}" | gpg --passphrase-fd 0 --output google-key.json --decrypt google-key.json.gpg

    - name: Install dependencies
      run: npm install

    - name: Run Cypress Tests and Generate Report
      run: |
        npm run cypress:prerun
        npm run cypress:run
        npm run postcypress:run
        npm run report:generate

Additional Notes:

  • Security: Be cautious with the passphrase and the encrypted file. If someone gains access to both, they can decrypt your secret.
  • GPG Version: Ensure that the GPG version you use locally for encryption is compatible with the version installed in the GitHub Actions runner.
  • File Paths: Adjust file paths in the script according to where you store the encrypted file and where the decrypted file is needed.

Demo Branch

For those who want to see qa-shadow-report in action before integrating it into their projects, we have set up a demo branch in the repository. This branch includes a fully configured setup where you can run and observe the report generation process.

How to Use the Demo

  1. Switch to the Demo Branch: Navigate to our repository and switch to the branch named demo.

  2. Follow the Setup Instructions: Ensure you meet the prerequisites and follow the setup steps outlined in the Setup Guide.

  3. Install Dependencies:

    • For General Use: If you're looking to use the plugin without modifying its code, you can easily install the published package from npm. Execute the following commands at the root of your project: cd [framework]-example && npm install qa-shadow-report && npm install

      This will install the qa-shadow-report package from npm along with any other required dependencies.

    • For Advanced Users (Local Development): If you are contributing to the qa-shadow-report code and need to test your changes within [framework]-example, you can use a locally linked version of the package. Run this command at the root of the project: npm link && cd [framework]-example && npm install && npm link qa-shadow-report

      This sequence of commands first creates a local link to your development version of qa-shadow-report, then sets up [framework]-example to use this local version, and finally installs any other dependencies.

  4. Run the Tests: While in the [framework]-example folder, use the [framework] command to run [framework] tests and generate reports.

  5. Observe the Results: Check the generated reports in the specified Google Sheet or CSV file.

The demo branch is an excellent resource for understanding how qa-shadow-report functions in a real-world scenario. Feel free to explore and modify the demo setup to test different configurations and features.

Copyright

© 2024 Peter Souza. All rights reserved. Users are granted the freedom to use this code according to their needs and preferences. Note: Cypress is a registered trademark of Cypress.io, Playwright is a registered trademark of Microsoft Corporation, and Google Sheets is a registered trademark of Google Inc. This application is not officially endorsed or certified by Playwright, Microsoft Corporation, Google Inc., or Cypress.io.

cy-shadow-report's Issues

Enhance the Effectiveness and Meaningfulness of Unit Tests

Issue Overview

We are looking to improve the quality and effectiveness of our unit tests. Currently, some of our tests may not be as impactful or meaningful as they could be. This issue aims to refine these tests to ensure they are providing maximum value, catching potential bugs, and accurately reflecting our application's behavior.

Expected Outcome

  • Review and enhance existing unit tests to ensure they are meaningful and effective.
  • Focus can be on improving a single test or a suite of tests, depending on the contributor's capacity and familiarity with the codebase.
  • The goal is to improve test reliability, cover edge cases better, and ensure tests align well with our application's real-world use.

Specifics

  • Identify tests that are either too simplistic, not covering enough scenarios, or potentially misleading in what they test.
  • Refactor these tests to enhance their effectiveness. This could include rewriting assertions, adding new test cases, or breaking down larger tests into more focused, smaller ones.
  • Ensure that refactored tests align with our testing guidelines and coding standards.

Contribution Guidelines

  • Please reference this issue in your pull request.
  • If addressing a specific test or a small group of tests, clearly state this in your pull request description.
  • Ensure all modified tests pass and provide adequate coverage.
  • Document any significant changes or decisions in your pull request for reviewers to understand your approach.

Your contributions are invaluable in helping us maintain a high standard for our project's reliability and quality. We appreciate your efforts in making our unit tests more meaningful and robust.

Bug: No Support for Windows

Current result, describe the bug

Does cy-shadow-report support Windows?

I am testing the project on my windows 10 machine using the demo branch. After generating and merging the files, I ran

cy-shadow-report

but I am getting this error

node:internal/errors:497
ErrorCaptureStackTrace(err);
^
Error [ERR_UNSUPPORTED_ESM_URL_SCHEME]: Only URLs with a scheme in: file, data, and node are supported by the default ESM loader. On Windows, absolute paths must be valid file:// URLs. Received protocol 'c:'
at new NodeError (node:internal/errors:406:5)
at throwIfUnsupportedURLScheme (node:internal/modules/esm/load:216:11)
at defaultLoad (node:internal/modules/esm/load:118:3)
at ModuleLoader.load (node:internal/modules/esm/loader:396:13)
at ModuleLoader.moduleProvider (node:internal/modules/esm/loader:278:56)
at new ModuleJob (node:internal/modules/esm/module_job:65:26)
at #createModuleJob (node:internal/modules/esm/loader:290:17)
at ModuleLoader.getJobFromResolveResult (node:internal/modules/esm/loader:248:34)
at ModuleLoader.getModuleJob (node:internal/modules/esm/loader:229:17)
at async ModuleLoader.import (node:internal/modules/esm/loader:315:23) {
code: 'ERR_UNSUPPORTED_ESM_URL_SCHEME'
}

Expected behavior

Work as intended and pass data to google sheets

Steps to reproduce

Precondition:

  1. Go to '...'
  2. Click on '...'
  3. Scroll down to '...'
  4. See error

Screenshots or screen recordings

No response

Browsers

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Implement Support for Excel and CSV File Formats

Issue Overview

Our project currently lacks the ability to handle Excel and CSV file formats, which are commonly used in data processing and reporting. This issue aims to integrate support for these formats, potentially utilizing the ExcelJS library, to enhance the versatility and utility of our application.

Expected Outcome

  • Implement the ability to read and write Excel (.xlsx) and CSV (.csv) file formats.
  • Integrate ExcelJS or a similar library to handle these file formats efficiently.
  • Ensure compatibility with various versions of Excel files and CSV formats.
  • Provide a seamless and user-friendly interface for file import/export functionalities.

Specifics

  • Research and choose an appropriate library (ExcelJS or an alternative) that best suits our project's needs.
  • Develop a module or service within our application to handle Excel and CSV file operations.
  • Create functions to import data from and export data to these file formats.
  • Test compatibility with different versions and variations of Excel and CSV files.
  • The implementation should be efficient, reliable, and easy to maintain.

Contribution Guidelines

  • Fork the repository and create a feature branch for this issue.
  • Ensure that the chosen library integrates well with our existing codebase.
  • Add unit tests covering the new functionalities.
  • Update documentation to include instructions and examples on how to use the new features.
  • Ensure code quality, follow coding standards, and make sure all tests pass.
  • Submit a pull request linking to this issue for review.

This feature is essential to extend the capabilities of our project and will significantly benefit users who work with Excel and CSV files regularly. Contributions that help us achieve this functionality are highly valued and contribute greatly to the project's growth and usability.

Standardize Function Syntax to ES6 Format

Issue Overview

We have identified that most of our codebase utilizes ES6 syntax. However, there is at least one function still written in the older CommonJS format. This inconsistency can lead to potential confusion and maintenance challenges.

Expected Outcome

  • Ensure that all functions within our codebase conform to the ES6 syntax standard.
  • This will bring uniformity to our code and may improve readability and future maintainability.

Specifics

  • Locate the function(s) using the older CommonJS format.
  • Refactor these functions to the ES6 standard, ensuring that functionality remains unaffected.
  • Test thoroughly to ensure that refactoring does not introduce any regressions.

Contribution Guidelines

  • Please reference this issue in your pull request.
  • Ensure that your code follows our existing style guidelines.
  • Add comments where necessary to clarify complex logic after refactoring.
  • If multiple functions need refactoring, consider submitting separate pull requests for each to simplify review.

We welcome contributions to this issue and appreciate your help in improving the consistency and quality of our project's codebase.

Feature: GitHub Discussion

Description

There is a section in the CONTRIBUTING.md called "I don't want to read this whole thing I just have a question!!!".

It is a brilliant idea to point contributors to create an issue when they encounter a problem or bug, or want to request an enhancement. But not all contributors have accounts on Stack Overflow.

Suggested solution

Enable the GitHub Discussion feature to allow contributors to ask questions and start discussions around the project as an alternative to StackOverflow.

Feature: Add issue templates

Description

There are sections about how to write an issue for Reporting Bugs and Suggesting Enhancements in the "How Can I Contribute?" section of the Contributing Guidelines, and these contain clear instructions about how to write issues.

However, there is still a chance that contributors will miss including important information that maintainers need.

Suggested solution

  • Create issue templates for reporting bugs and suggesting enhancements. This will give these benefits:

    • Prevent contributors from omitting information that maintainers need.
    • Make it easier for maintainers to triage and prioritize the issues.
  • If issue templates are provided, it is safe to remove the "Reporting Bugs" and "Suggesting Enhancements" sections completely from the Contributing Guide. This can reduce the length of the guide.

Feature: Add a linter/formatter for the project

Describe the suggestion

The project could benefit from a formatter and linter that run when contributors add their code. This can help keep the codebase consistent and more easily maintainable, especially as engineers add or improve features.

Type of feature

✨ Feature

Current behavior

Currently, code is not formatted or linted at the project level.

Suggested solution

Add ESLint and Prettier to the project settings with two scripts in the package.json: one to run the linter and report errors, and one to automatically fix them. This may also be a good chance to remove the overrides in .eslintrc that aren't necessary.
Add the linter instructions to the CONTRIBUTING.md doc so everyone knows how and why they need to run this before their PR can be approved.
Add a linting step to the pull request template checklist.
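As a sketch of the two scripts (names illustrative; the exact ESLint/Prettier invocations may need tuning for this repo), the package.json additions could look like:

```json
{
  "scripts": {
    "lint": "eslint .",
    "lint:fix": "eslint . --fix && prettier --write ."
  }
}
```

Contributors would then run npm run lint to see errors and npm run lint:fix to fix them automatically before opening a PR.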

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Contribution: testTriageLAbel

Describe the contribution

Tasks:
[ ] Task one description...
[ ] Task two description...
[ ] Add more tasks as needed...

Type of contribution

✨ Feature

Current behavior (if applicable)

No response

Suggested solution and implementation (if applicable)

Solutions:
[ ] Solution one description...
[ ] Solution two description...
[ ] Add more solutions as needed...

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Docs: Add Code of Conduct

Description

There is a section called Code of Conduct in the Contributing Guide.

However, no Code of Conduct (COC) document is attached anywhere in the project.

A COC is very important for an open source project: it helps create and ensure a safe and welcoming community around the project, and it serves as the basis for the steps maintainers will take if someone is not following it.

Suggested solution

  • If you have a COC, create a CODE_OF_CONDUCT.md in the root of your project and add your COC to it. Then link to it from your README and Contributing Guide.

Improve PR template

Problem

In the PR template, the instructions are not formatted in a way that clearly conveys their intent to readers. Additionally, the template is missing a spot that encourages contributors to link the issue that their PR solves.

Suggested Solutions

  • Comment out the instructions
  • Add a "Related Tickets & Documents" section after the Description

Benefits

Adding this information would help contributors create effective PRs, make it easier for maintainers to review them, and make the repo more organized.

Contribution: Test Issue Assignment

Describe the contribution

Tasks:
[ ] Task one description...
[ ] Task two description...
[ ] Add more tasks as needed...

Type of contribution

✨ Feature

Current behavior (if applicable)

No response

Suggested solution and implementation (if applicable)

Solutions:
[ ] Solution one description...
[ ] Solution two description...
[ ] Add more solutions as needed...

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Add Unit Tests for Uncovered Modules

Issue Overview

We have identified several modules in our codebase that currently lack adequate unit test coverage. This issue aims to address this gap by adding comprehensive unit tests for these modules. Contributors are welcome to tackle the entire task or focus on adding tests for a single module.

Expected Outcome

  • Increase the unit test coverage by writing tests for modules that currently lack them.
  • Contributors can choose to focus on a single module or multiple modules based on their familiarity and comfort with the codebase.
  • The aim is to ensure that all critical functionalities of these modules are tested and reliable.

Specifics

  • Identify modules within the codebase that do not have unit tests.
  • Write unit tests that cover key functionalities and edge cases of these modules.
  • Ensure that the new tests adhere to our project's testing conventions and standards.
  • The tests should be clear, maintainable, and well-documented to ease future modifications or enhancements.
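As an illustrative sketch of such a test (the module under test, formatDuration, is a made-up example, not actual package code; the describe/it/expect stand-ins below exist only so the sketch runs on its own, since Jest provides the real ones):

```javascript
// Example function that currently lacks coverage.
const formatDuration = (ms) => {
  if (typeof ms !== 'number' || ms < 0) {
    throw new Error('Expected a non-negative number of milliseconds');
  }
  return `${Math.floor(ms / 1000)}s ${ms % 1000}ms`;
};

// Tiny stand-ins so this sketch is self-contained; in the real suite,
// describe/it/expect come from Jest.
const describe = (name, fn) => fn();
const it = (name, fn) => fn();
const expect = (actual) => ({
  toBe(expected) {
    if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
  },
  toThrow() {
    try { actual(); } catch { return; }
    throw new Error('expected function to throw');
  },
});

describe('formatDuration', () => {
  it('formats milliseconds into seconds plus remainder', () => {
    expect(formatDuration(1500)).toBe('1s 500ms');
  });
  it('rejects negative input (edge case)', () => {
    expect(() => formatDuration(-1)).toThrow();
  });
});
```

Note how the suite covers both the happy path and an edge case, per the Specifics above.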

Contribution Guidelines

  • Please reference this issue in your pull request.
  • If you're focusing on a specific module, mention this in your pull request description.
  • Ensure that all new tests are passing and do not break existing functionality.
  • Include detailed comments or documentation in your pull request to explain the rationale behind your test cases and any specific considerations.

This effort is crucial for maintaining the overall health and quality of our project. Your contributions towards improving our test coverage are greatly appreciated and will help ensure the robustness and reliability of our application.

Docs: Add "Related Issues" section in the PR template

Description

When I created a PR, I didn't see a section to link the related issue, so I added it manually in the PR.

Linking a PR to an issue will give below benefits:

  • Clean organization: It's easier for maintainers to track the progress and history of the project because every PR is linked to an issue.
  • Automation: Maintainers don't have to close issues manually after they close PRs. This will save them time and increase productivity.

Here is a resource about linking a PR to an issue.

Suggested solution

In the PR template:

  • Add a section — you can call it "Related Issues" or anything you want — to link the related issue.

  • In the section, add commented-out instructions for linking the issue. Something like:

    Add the prefix "Closes", "Fixes", or "Resolves" followed by the related issue number. For example, "Closes #123"
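Putting those pieces together, the new section in the PR template might look like this (heading wording illustrative):

```markdown
## Related Issues

<!-- Add the prefix "Closes", "Fixes", or "Resolves" followed by the
     related issue number. For example: Closes #123 -->
```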

Contribution: test

Describe the contribution

Tasks:
[ ] Task one description...
[ ] Task two description...
[ ] Add more tasks as needed...

Type of contribution

✨ Feature

Current behavior (if applicable)

No response

Suggested solution and implementation (if applicable)

Solutions:
[ ] Solution one description...
[ ] Solution two description...
[ ] Add more solutions as needed...

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Standardize Error Handling Across the Project

Issue

The current codebase exhibits inconsistent error handling practices. In various instances, errors are thrown using throw new Error(...), thrown using throw Error(...), or simply logged to the console with messages for the user. This inconsistency can lead to confusion and make the code harder to maintain and debug.

Expected Behavior

Error handling should be standardized across the entire project. We should decide on a consistent method for throwing errors and displaying messages to users.

Tasks

  • Review all instances of error handling in the codebase.
  • Replace the mixed throw new Error(...) and throw Error(...) calls with a standardized error-throwing mechanism.
  • Remove any user-facing error logs that are meant for debugging purposes.
  • Document the standard error handling procedure in the CONTRIBUTING guidelines.
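One possible standardized pattern (illustrative, not the project's settled convention; requireSheetId is a made-up helper) is to always use throw new Error with a descriptive, actionable message:

```javascript
// Inconsistent variants seen in codebases like this:
//   throw Error('bad sheet id');     // Error() without `new`
//   console.error('bad sheet id');   // logs but does not stop execution
// Standardized: throw new Error with a message that tells the user what to do.
const requireSheetId = (sheetId) => {
  if (!sheetId) {
    throw new Error('Missing Google Sheet ID: set GOOGLE_SHEET_ID before running.');
  }
  return sheetId;
};
```

Whatever convention is chosen, documenting it in CONTRIBUTING (per the last task above) is what keeps future code consistent.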

Impact

Addressing this issue will improve code maintainability, provide clearer debugging messages during development, and enhance the user experience by providing consistent and informative error messages.

Seeking Contributors

Anyone who would like to contribute to this standardization effort is welcome. This is a good first issue for newcomers to the project who are familiar with best practices in error handling.

Contribution: Create Sanity Test Suite

Describe the contribution

Use the following package.json script:

"test:sanity": "node --experimental-vm-modules node_modules/.bin/jest --testNamePattern='\[sanity\]'"

Tasks:

  1. Configuration Check:

    • Verify that the project is correctly configured using isProjectConfigured.
    • Ensure the proper resolution of the post-install script path.
  2. Command-Line Interface (CLI):

    • Test that the CLI correctly parses the framework argument (e.g., cypress, playwright).
    • Validate that the CLI recognizes and handles the todays-report and monthly-summary commands.
    • Ensure that the CLI correctly processes optional flags (--csv, --duplicate).
    • Verify that the help message (--help) displays correctly.
  3. Google Sheets Configuration:

    • Verify that the script checks for the presence of Google Sheets configuration (GOOGLE_KEYFILE_PATH, GOOGLE_SHEET_ID).
    • Ensure the appropriate handling when Google Sheets configuration is missing (default to CSV).
  4. Post-Install Script Execution:

    • Test the execution of the post-install script if the project is not configured.
    • Validate the handling of errors during the execution of the post-install script.
  5. Daily Report Generation:

    • Verify that the handleDailyReport function is called with the correct options when the todays-report command is issued.
    • Ensure that errors during daily report generation are handled gracefully.
  6. Monthly Summary Generation:

    • Verify that the handleSummary function is called with the correct options when the monthly-summary command is issued.
    • Ensure that errors during monthly summary generation are handled gracefully.
    • Test that CSV output for monthly summaries is not supported and returns the correct error message.
  7. Main Function Execution:

    • Validate that the main function is called with the correct options when no specific command is issued.
    • Ensure that errors during the main function execution are handled gracefully.
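As a sketch of the CLI-parsing piece these tasks describe (parseCliArgs is hypothetical, not the package's real CLI code), a sanity test would exercise something like:

```javascript
// Parse the framework argument, subcommand, and optional flags from argv.
const parseCliArgs = (argv) => ({
  framework: argv.find((a) => ['cypress', 'playwright'].includes(a)) ?? null,
  command: argv.find((a) => ['todays-report', 'monthly-summary'].includes(a)) ?? null,
  csv: argv.includes('--csv'),
  duplicate: argv.includes('--duplicate'),
});

// A Jest test tagged for the sanity suite would be picked up by
// --testNamePattern='\[sanity\]', e.g.:
//   it('[sanity] parses the framework argument', () => { ... });
```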

Type of contribution

✨ Feature

Current behavior (if applicable)

No response

Suggested solution and implementation (if applicable)

No response

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Refactor Floating Variables into Methods for Procedural Consistency

Issue Summary

In our codebase, there are several instances where variables are declared and used in a floating manner, without being encapsulated within methods. This issue aims to refactor these floating variables, integrating them into well-defined methods to enhance the procedural structure and maintainability of our project.

Objective

  • Identify and refactor all floating variables in the codebase.
  • Encapsulate these variables within appropriate methods.
  • Ensure the procedural integrity of the code while enhancing readability and maintainability.

Details and Implementation Guidance

  • Conduct a thorough review of the current codebase to locate floating variables.
  • For each identified variable, determine the most logical method or procedural block where it should be encapsulated.
  • Refactor the code by defining new methods or integrating variables into existing methods, following the project's coding standards and procedural format.
  • Ensure that the refactoring does not alter the existing functionality and performance of the application.
  • Thoroughly test the refactored code to verify that all functionalities work as expected post-refactoring.
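For illustration, the refactor might look like the following (reportDate/getReportDate are made-up names, not actual package code):

```javascript
// Before: a "floating" module-level variable, evaluated once at import time,
// which is hard to test or override:
//   const reportDate = new Date().toISOString().slice(0, 10);

// After: encapsulated in a method; callers get a fresh value, and tests can
// inject a fixed date.
const getReportDate = (now = new Date()) => now.toISOString().slice(0, 10);
```

Because the value is now computed on demand with an injectable input, the refactored code stays testable without changing its observable behavior.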

Contribution Instructions

  • Fork the repository and create a feature branch for this issue.
  • Refactor the code as described, adhering to the project's coding guidelines.
  • Write or update unit tests to cover the changes.
  • Document any significant changes or decisions made during the refactoring process.
  • Review your changes to ensure consistency with the overall project structure.
  • Submit a pull request linking to this issue for review.

Additional Notes

This refactor is crucial for improving the code quality and long-term maintainability of the project. It aligns with our goal to maintain a clean, efficient, and logically structured codebase. Contributors tackling this issue are encouraged to propose improvements or best practices that align with the project's procedural coding philosophy.

Comprehensive Review and Update of Project Documentation

Issue Summary

The goal of this issue is to thoroughly review our project's documentation to verify its accuracy and clarity. We aim to ensure that new users or contributors can easily set up and understand the project using our existing documentation.

Objective

  • Conduct a full review of the project's documentation, following the setup and usage instructions as a new user would.
  • Identify any inaccuracies, outdated information, or unclear instructions.
  • Update the documentation to address these issues, improving clarity and accuracy.

Details and Implementation Guidance

  • Go through the project's setup process step-by-step, strictly following the documentation.
  • Note any discrepancies between the documentation and the actual setup process, including missing steps, outdated information, or unclear instructions.
  • Pay special attention to parts of the documentation that might be confusing for someone new to the project.
  • Make the necessary updates to the documentation, ensuring that it is comprehensive, clear, and up-to-date.
  • If you find any areas that could benefit from additional context or explanations, consider enhancing the documentation with more detailed descriptions or examples.
  • After making updates, re-test the documentation to ensure that a new user can successfully set up and use the project without external help.

Contribution Instructions

  • Fork the repository and create a feature branch for this issue.
  • As you go through the documentation, make notes of any issues you encounter.
  • Update the documentation in your branch, addressing all identified issues.
  • Ensure that any changes or additions maintain consistency with the project's documentation style and format.
  • Submit a pull request with your changes, clearly outlining the problems you encountered and how your updates address them.
  • Link your pull request to this issue for review and discussion.

Additional Notes

This issue is crucial for maintaining the usability and accessibility of our project, especially for new users and contributors. Accurate and clear documentation is key to the growth and success of open-source projects. We encourage contributors to approach this task with a fresh perspective, as if encountering the project for the first time, to ensure that our documentation is as helpful and user-friendly as possible.

Optimize Location and Handling of User and Local Test Variables

Issue Summary

Our package currently has a specific method for locating and handling user variables and local test variables. This issue aims to reassess and potentially improve this method to enhance the efficiency, reliability, and maintainability of our codebase.

Objective

  • Evaluate the current approach used for locating and handling user variables and local test variables.
  • Identify areas for improvement or optimization.
  • Implement changes that enhance the efficiency and reliability of variable handling.

Details and Implementation Guidance

  • Conduct a detailed analysis of the current method used for locating and managing user and local test variables.
  • Identify any inefficiencies, potential bugs, or areas where the process can be streamlined or improved.
  • Consider alternative approaches or best practices that could be adopted to optimize variable handling.
  • Implement the proposed improvements, ensuring they align with the overall design and architecture of the package.
  • Thoroughly test the updated method to ensure it maintains or improves the current functionality and performance.
  • Document the changes made and any impact on the package's usage or behavior.
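One possible precedence scheme such an analysis might land on (illustrative only; resolveSetting is not actual package code) is: explicit option, then environment variable, then default:

```javascript
// Resolve a setting by precedence: caller-supplied option > env var > fallback.
const resolveSetting = (key, options = {}, env = process.env, fallback = undefined) => {
  if (options[key] !== undefined) return options[key];
  if (env[key] !== undefined) return env[key];
  return fallback;
};
```

A single resolution path like this makes it obvious where every user and local test variable comes from, which is the maintainability goal described above.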

Contribution Instructions

  • Fork the repository and create a feature branch for this issue.
  • Analyze and refactor the variable handling method as described.
  • Update or add unit tests to cover the new or changed functionality.
  • Ensure your changes adhere to the project's coding standards and guidelines.
  • Summarize your changes and the rationale behind them in your pull request.
  • Link your pull request to this issue for review and discussion.

Additional Notes

This issue is critical for ensuring the robustness and efficiency of our package. The goal is to optimize how we handle key variables without compromising the package's functionality. Contributors should focus on solutions that offer tangible improvements in performance, readability, or maintainability. We encourage innovative approaches and welcome any suggestions or insights that could contribute to this objective.
