v2.0.2 #13
Caution: Review failed. The pull request is closed.

Walkthrough

This update introduces a new

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant run as run()
    participant test as test()
    participant testthat as testthat::test_dir()
    participant FeatureFiles
    participant Validator
    participant Executor as execute()
    User->>test: call test(filter)
    test->>run: call run(path=".", filter)
    run->>FeatureFiles: read and filter .feature files
    FeatureFiles->>Validator: validate features (excluding docstrings/tables)
    Validator-->>run: validation result
    run->>Executor: execute(features, steps, parameters, hooks)
    Executor-->>run: execution result
    run-->>test: returns NULL invisibly
    test->>testthat: run test_dir on temp test file calling run()
    testthat-->>User: test results
```
Actionable comments posted: 4
🧹 Nitpick comments (6)
tests/testthat/test-acceptance.R (1)
1-2: Consider making acceptance tests run outside of coverage as well

This setup only runs acceptance tests when code coverage is being measured (`covr::in_covr()`). While this helps with coverage reports, it means these tests won't run during normal development or CI pipelines unless they specifically run coverage. Consider either:
- Adding a comment explaining why tests only run during coverage analysis
- Using an environment variable instead so tests can be enabled in different contexts:
```diff
-testthat::skip_if_not(covr::in_covr())
+testthat::skip_if_not(covr::in_covr() || Sys.getenv("RUN_ACCEPTANCE_TESTS") == "true")
```

.github/workflows/test-acceptance.yaml (2)
8-8: Consider using a more descriptive workflow name.

The current name "test-acceptance.yaml" appears to be the filename rather than a descriptive workflow name. A more appropriate name might be "Acceptance Tests" or "Run Acceptance Tests".
```diff
-name: test-acceptance.yaml
+name: Acceptance Tests
```
25-25: Consider adding dependency caching to improve workflow speed.

Adding caching parameters to the setup-r-dependencies action can speed up workflow runs by reusing cached dependencies.
```diff
-  - uses: r-lib/actions/setup-r-dependencies@v2
+  - uses: r-lib/actions/setup-r-dependencies@v2
+    with:
+      cache: true
+      cache-version: 1
```

tests/acceptance/setup-steps.R (2)
15-15: Add a comment explaining the purpose of the sleep.

The hardcoded 3-second sleep has no explanation, making it unclear why it's necessary.
```diff
- Sys.sleep(3)
+ # Sleep to ensure file system operations complete before proceeding
+ Sys.sleep(3)
```
17-19: Consider capturing and logging output for debugging.

Completely suppressing output makes debugging difficult when tests fail.
```diff
- withr::with_output_sink(nullfile(), {
-   context$result <- test(stop_on_failure = FALSE)
- })
+ # Capture output to a log file for debugging purposes
+ log_file <- file.path(context$tempdir, "cucumber_test.log")
+ withr::with_output_sink(log_file, {
+   context$result <- test(stop_on_failure = FALSE)
+ })
+ # Store log path for inspection if tests fail
+ context$log_file <- log_file
```

R/detect.R (1)
16-18: Fix regex inconsistency in docstring detection.

There's a subtle inconsistency in the regex pattern: the backticks pattern is missing the `$` end anchor that's present in the quotes pattern.

````diff
- str_detect(x, "^\\s*```|^\\s*\"\"\"$")
+ str_detect(x, "^\\s*```$|^\\s*\"\"\"$")
````

This will ensure the function only matches lines containing only the triple backticks (plus whitespace), preventing false positives when backticks are followed by other content.
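A quick sketch of the difference the anchor makes, using hypothetical input lines (the delimiter string is built with `strrep()` only to avoid literal fence characters):

```r
library(stringr)

ticks <- strrep("`", 3)                  # "```"
lines <- c(
  paste0("  ", ticks),                   # delimiter alone: should match
  paste0("  ", ticks, " not a delim"),   # delimiter plus trailing text: should not
  '  """'                                # quoted delimiter: matches either way
)

str_detect(lines, paste0("^\\s*", ticks, "|^\\s*\"\"\"$"))   # unanchored: TRUE TRUE TRUE
str_detect(lines, paste0("^\\s*", ticks, "$|^\\s*\"\"\"$"))  # anchored:   TRUE FALSE TRUE
```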
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (21)
- .github/workflows/test-acceptance.yaml (1 hunks)
- DESCRIPTION (2 hunks)
- NEWS.md (1 hunks)
- R/detect.R (3 hunks)
- R/hooks.R (2 hunks)
- R/normalize_feature.R (2 hunks)
- R/parameter.R (2 hunks)
- R/parse_table.R (1 hunks)
- R/step.R (1 hunks)
- R/test.R (2 hunks)
- R/tokenize.R (1 hunks)
- R/validate.R (3 hunks)
- R/zzz.R (1 hunks)
- README.md (1 hunks)
- tests/acceptance/background.feature (1 hunks)
- tests/acceptance/setup-steps.R (1 hunks)
- tests/testthat/_snaps/local/examples.md (8 hunks)
- tests/testthat/test-acceptance.R (1 hunks)
- tests/testthat/test-normalize_feature.R (1 hunks)
- tests/testthat/test-parameter.R (1 hunks)
- tests/testthat/test-tokenize.R (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: ubuntu-latest (devel)
🔇 Additional comments (46)
DESCRIPTION (2)
4-4: Version bump looks appropriate.

The increment from 2.0.1 to 2.0.2 correctly follows semantic versioning for the bug fixes and minor enhancements described in the PR.
38-39: Dependency addition is well-documented.

Adding `pkgload` to the Suggests field is appropriate as it's likely used for the new acceptance tests without introducing a hard dependency.

README.md (2)
7-7: Good addition of CI status badge.

The new acceptance test workflow badge provides visibility into the test status and maintains consistency with your other GitHub Actions badges. The custom styling with cucumber colors is a nice touch.
13-13: Simplified package description.

The more concise description focuses on what the package is rather than implementation details, which is a good improvement for readability.
NEWS.md (1)
1-5: Clear and well-organized release notes.

The NEWS.md entries for version 2.0.2 effectively communicate the changes made in this release, following the established format with appropriate emoji categorization. The bug fixes for docstring and table handling are well-explained, as is the addition of acceptance tests.
tests/testthat/test-tokenize.R (1)
533-536: Test expectation updated to match new behavior.

The test now correctly expects the preservation of leading whitespace in data table entries, which aligns with the change to stop normalizing text within tables.
R/tokenize.R (1)
52-56: Function logic changed to preserve leading whitespace

The change to return `x` directly instead of removing leading whitespace is an important adjustment to preserve the original formatting in the data. This change aligns with the broader PR updates for handling docstrings and tables that require preserving their original indentation and formatting.

R/test.R (1)
113-123: Improved return value handling

The modifications to capture and return the test results invisibly improve the function's usability programmatically while preserving the original invisible behavior. This change is valuable for integrating with the GitHub Actions workflow and for programmatic access to test results.
tests/testthat/_snaps/local/examples.md (7)
136-136: Improved backtrace readability

Removing file and line references from the backtrace makes snapshots more stable and readable. This is appropriate for test snapshots.

153-153: Improved backtrace readability

Similar to the earlier backtrace change, removing file and line references makes the output cleaner and more stable.

217-217: Improved backtrace readability

Standardizing the backtrace format to exclude file and line references makes the snapshots more maintainable.

226-226: Improved backtrace readability

Consistent with earlier changes, removing file and line references produces cleaner snapshot output.

250-250: Improved backtrace readability

Consistent cleanup of backtrace output for better test snapshot stability.

259-259: Improved backtrace readability

Removing file and line references maintains consistency throughout the snapshot.

268-268: Improved backtrace readability

Final instance of backtrace formatting improvement for better snapshot stability.
R/parse_table.R (1)
8-8: Consistent whitespace handling

Good addition to remove leading whitespace from table lines before parsing. This ensures consistent parsing regardless of the indentation level in the original feature file, which is important since the overall PR is refining how tables and docstrings are processed.
R/normalize_feature.R (2)
3-6: Good implementation of selective normalization

The added code correctly leverages the `special_mask()` function to skip keyword normalization for special content like docstrings and tables. This preserves the original content in these sections while still normalizing regular lines.
19-20: Explicit return improves readability

Adding an explicit return statement makes the function's behavior clearer.
R/zzz.R (1)
2-6: Good centralization of option names

Centralizing option names in `.onLoad()` is a good practice that improves maintainability and consistency. This approach makes it easier to modify option names in the future if needed.

tests/testthat/test-normalize_feature.R (2)
78-112: Well-structured test for docstring handling

This test correctly verifies that normalization respects docstring boundaries by:
- Setting up a feature with steps containing a docstring with keywords
- Confirming that keywords outside the docstring are normalized to "Step"
- Verifying that content inside the docstring remains unchanged
The test follows the AAA (Arrange-Act-Assert) pattern and clearly demonstrates the expected behavior.
114-144: Thorough table handling verification

This test effectively validates that table content is preserved during normalization. The test covers both the preservation of table data and the correct normalization of surrounding steps.
tests/testthat/test-parameter.R (4)
4-5: LGTM! Good refactoring to use configurable option keys.

The change from hardcoded option names to dynamic option keys via `getOption(".cucumber_parameters_option")` improves configurability and modularity.
8-8: LGTM! Good use of abstraction.

Nice improvement to use the `get_parameters()` helper function instead of directly accessing options. This encapsulates the implementation details and makes the code more maintainable.
15-16: LGTM! Consistent implementation.

The changes maintain consistency with the previous refactoring, using the same pattern for dynamic option keys.
20-20: LGTM! Consistent helper function usage.

Consistent use of the `get_parameters()` helper function maintains code clarity and encapsulation.

R/validate.R (3)
6-6: LGTM! Good filtering of special content in validation.

The addition of `special_mask()` filtering is a good improvement, allowing special content like docstrings and tables to be excluded from validation rules that shouldn't apply to them.
20-20: LGTM! Consistent implementation across validation functions.

The same filtering pattern has been consistently applied across all validation functions, which is good for maintainability and understanding.
36-36: LGTM! Consistent implementation of special content filtering.

The consistent application of `special_mask()` filtering to all validation functions ensures uniform handling of special content like docstrings and tables.

R/step.R (3)
145-147: Refactored step registration to use dynamic option key.

The implementation now uses a configurable option key instead of a hardcoded string. This makes the storage location of steps configurable and consistent with the pattern used for hooks and parameters in the package.
152-152: Updated clear_steps() to use the same dynamic option approach.

This change is consistent with the pattern applied to register_step(), ensuring the configuration key is retrieved from options.
156-156: Updated get_steps() to use the dynamic option key.

The implementation now uses getOption(".cucumber_steps_option") to retrieve the key name, making the retrieval mechanism consistent with the storage approach.
R/hooks.R (5)
79-79: Refactored hook retrieval to use dynamic option key.

The hooks retrieval now uses a configurable option key instead of a hardcoded string, making it more flexible.
87-87: Updated hook registration to use dynamic option setting.

This change aligns with the pattern used in step.R, using rlang's list2 and := operators for dynamic option setting.
96-96: Added missing import for rlang::list2.

This ensures the function is properly imported for use in the clear_hooks() function.
98-98: Updated clear_hooks() to use the dynamic option approach.

Consistent implementation with the pattern used in register_hook().
102-102: Updated get_hooks() to use the dynamic option key.

This completes the refactoring of hooks-related functions to use the configurable option key.
R/parameter.R (4)
72-75: Refactored parameter retrieval to use dynamic option key.

The implementation now uses a configurable option key from getOption(".cucumber_parameters_option") rather than the hardcoded "parameters" string, making it more flexible and consistent with the approach used for steps and hooks.
84-84: Added explicit imports for rlang functions.

The import of `rlang::list2` and rlang's `:=` provides the necessary functions for dynamic option setting in set_parameters().
86-86: Updated set_parameters() to use dynamic option setting.

This change aligns with the pattern used in hooks.R and step.R, using rlang's list2 and := operators for dynamic option setting.
91-91: Refactored default parameters setup.

Now using the set_parameters() function instead of directly setting options, which provides better consistency and encapsulation.
tests/acceptance/background.feature (4)
1-7: Added comprehensive acceptance test feature for Background functionality.

This feature file properly explains the Background concept in Cucumber and its execution order (after hooks but before scenarios).
8-35: First scenario tests background with a single scenario.

This scenario verifies that background steps run before scenario steps and the steps counter increments correctly. It dynamically creates test files and runs cucumber to validate the behavior.
36-71: Second scenario tests background with multiple scenarios.

This scenario ensures that the background step runs before each scenario, not just once for the entire feature. It confirms that step counting works as expected across multiple scenarios with a shared background.
54-54: ❓ Verification inconclusive
Consistent issue with step counter initialization in all scenarios.
All the step definitions use the same pattern `context$steps <- context$steps %||% 1 + 1` which might not initialize the counter as expected. Consider clarifying the initialization logic or using more explicit incrementation. If the test passes as shown, please verify that the step counter works as expected:
Also applies to: 58-58, 62-62
🏁 Script executed:
```shell
#!/bin/bash
# Run the acceptance test to see if it passes with the current implementation
rg -A 5 "steps <- context\\$steps %\\|\\|% 1 \\+ 1"
```

Length of output: 157
Clarify step counter initialization logic
All occurrences of the step counter in tests/acceptance/background.feature are using

```r
context$steps <- context$steps %||% 1 + 1
```

which, due to R's infix-operator precedence, is parsed as

```r
context$steps <- (context$steps %||% 1) + 1
```

so on the first invocation you default to 2, not 1.
Locations to review:
- tests/acceptance/background.feature: lines 54, 58, 62
Suggested fixes:
- If you intend to start counting at 1, use `context$steps <- (context$steps %||% 0) + 1`
- Or explicitly group the existing default: `context$steps <- (context$steps %||% 1) + 1`
- Alternatively, initialize once with `context$steps <- 0` before any steps, then simply do `context$steps <- context$steps + 1`

Please verify locally that each scenario produces the expected step values and update the initialization accordingly.
R/detect.R (2)
2-4: Well-structured and focused table detection function.

This new utility function identifies single lines that are table lines, which is a good clean separation from the existing `detect_table` that checks entire blocks.
34-59: Clean implementation of state tracking for special content.

The `special_mask` function elegantly tracks docstring state using an environment and properly identifies both table lines and docstring content. The approach of toggling state at boundaries is appropriate for this use case.

Two considerations for potential edge cases:
- This implementation doesn't handle nested docstrings, which might be fine if they're not expected in your feature files
- Mixed delimiter types (starting with ``` and ending with """) would be treated as separate docstrings
If these edge cases are valid concerns for your use case, additional validation might be needed.
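To make the toggling approach concrete, here is a rough sketch of such a mask. This is illustrative only: the function and variable names are invented and it is not the package's actual `special_mask` implementation (the delimiter pattern is built with `strrep()` only to avoid literal fence characters).

```r
# Illustrative sketch: mark lines to skip during normalization/validation.
# TRUE = docstring delimiter, docstring body, or table row.
mask_special_lines <- function(lines) {
  delim <- paste0("^\\s*(", strrep("`", 3), '|""")\\s*$')  # ``` or """ alone on a line
  in_docstring <- FALSE
  vapply(lines, function(line) {
    if (grepl(delim, line)) {
      in_docstring <<- !in_docstring  # toggle state at each delimiter
      return(TRUE)
    }
    in_docstring || grepl("^\\s*\\|", line)  # docstring body or table row
  }, logical(1), USE.NAMES = FALSE)
}

mask_special_lines(c(
  "Given a step",
  '  """',
  "  Given inside docstring",   # preserved: inside docstring
  '  """',
  "  | a | b |"                 # preserved: table row
))
# FALSE TRUE TRUE TRUE TRUE
```

As noted above, a sketch like this shares the same edge cases: it does not handle nested docstrings, and mismatched delimiter types would be treated as separate toggles.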
```yaml
- name: Test acceptance
  run: |
    pkgload::load_all()
    test()
  shell: Rscript {0}
```
🛠️ Refactor suggestion
Add error handling around test execution.
The script execution doesn't include any explicit error handling. Consider checking if pkgload is installed before loading and adding more robust error handling.
```diff
  - name: Test acceptance
    run: |
+     if (!requireNamespace("pkgload", quietly = TRUE)) {
+       install.packages("pkgload")
+     }
      pkgload::load_all()
-     test()
+     results <- tryCatch(
+       test(),
+       error = function(e) {
+         message("Error running tests: ", e$message)
+         quit(status = 1)
+       }
+     )
+     if (!is.null(results) && sum(as.data.frame(results)$failed) > 0) {
+       message("Tests failed")
+       quit(status = 1)
+     }
    shell: Rscript {0}
```
tests/acceptance/background.feature
```gherkin
And a file named "tests/acceptance/setup-steps.R" with
  """
  given("a background step", function(context) {
    context$steps <- context$steps %||% 1 + 1
```
💡 Verification agent
🧩 Analysis chain
Potential issue with context initialization using %||% operator.
In the step definition, `context$steps <- context$steps %||% 1 + 1` might not work as intended. The `%||%` operator returns the left-hand side if it's not NULL, otherwise the right-hand side, and as a special operator it binds tighter than `+`, so the expression is parsed as `(context$steps %||% 1) + 1`.

If context$steps is initially NULL, the first call therefore yields 2, not 1. Is this the intended behavior? The test expects incremental counting.
🏁 Script executed:
```shell
#!/bin/bash
# Check if the %||% operator is defined in the package or if it's from another package
rg -A 1 "%\\|\\|%" --type R
```

Length of output: 53
🏁 Script executed:
```shell
#!/bin/bash
# Search for all occurrences of the %||% operator in the repo
rg '%\|\|%' -n
# Show the context around where context$steps is used in the feature test
rg 'context\$steps' -n -C3 tests/acceptance/background.feature
```

Length of output: 1607
Correct step count initialization in acceptance tests
Due to R's operator-precedence rules, special operators like `%||%` bind tighter than `+`, so

```r
context$steps <- context$steps %||% 1 + 1
```

is parsed as

```r
context$steps <- (context$steps %||% 1) + 1
```

meaning the very first call yields 2 instead of 1, and subsequent calls keep incrementing from there. To start at 1 and add one on each invocation, use a default of 0 and group the default explicitly:
• File: tests/acceptance/background.feature (inside the """ block that writes tests/acceptance/setup-steps.R)
– Lines: 22, 26, 54, 58, 62
Replace:
```diff
- context$steps <- context$steps %||% 1 + 1
+ context$steps <- (context$steps %||% 0) + 1
```

This ensures the first step yields 1 and each subsequent step increments by 1.
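The precedence can be verified interactively; the sketch below defines a local rlang-style `%||%` so it runs standalone:

```r
`%||%` <- function(x, y) if (is.null(x)) y else x  # rlang-style null-default

x <- NULL
x %||% 1 + 1                 # -> 2: special operators bind tighter than `+`
quote(x %||% 1 + 1)[[1]]     # -> `+`: addition is the outermost call in the parse tree

# The suggested fix counts from 1:
steps <- NULL
steps <- (steps %||% 0) + 1  # -> 1
steps <- (steps %||% 0) + 1  # -> 2
```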
Actionable comments posted: 3
🧹 Nitpick comments (2)
.github/workflows/test-mutation.yml (2)
7-7: Use a descriptive workflow name

The `name` field should convey the purpose of this workflow rather than mirror the filename.

```diff
-name: test-mutation.yml
+name: Mutation testing
```
23-26: Enable caching for R dependencies

To speed up repeated runs, activate caching in the dependencies setup step:

```diff
  - uses: r-lib/actions/setup-r-dependencies@v2
+   with:
+     cache-dependencies: true
      dependencies: true
      extra-packages:
        - any::jsonlite
        - any::pkgload
        - any::muttest
```
📒 Files selected for processing (4)
- .github/workflows/test-mutation.yml (1 hunks)
- DESCRIPTION (2 hunks)
- R/detect.R (3 hunks)
- README.md (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- DESCRIPTION
- README.md
- R/detect.R
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test-mutation.yml
15-15: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: ubuntu-latest (devel)
- GitHub Check: test-coverage
- GitHub Check: ubuntu-latest (devel)
🔇 Additional comments (1)
.github/workflows/test-mutation.yml (1)
15-17: Skip updating `actions/checkout@v3`

The static analysis hint flagged upgrading the checkout action, but `actions/checkout@v3` is the current stable release. No change needed here.

🧰 Tools
🪛 actionlint (1.7.4)
15-15: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
```yaml
permissions: write-all
```
🛠️ Refactor suggestion
Restrict workflow permissions to the minimum required
Granting write-all is overly broad. Since this job only pushes badge files, limit it to repository contents write access:
```diff
-permissions: write-all
+permissions:
+  contents: write
```
```yaml
- uses: r-lib/actions/setup-r-dependencies@v2
  with:
    extra-packages: any::jsonlite
```
Install required R packages for mutation testing
The script uses pkgload::load_all(), test_plan(), and test(), but pkgload and muttest aren’t installed by default. Update the dependencies step to install them:
```diff
-  - uses: r-lib/actions/setup-r-dependencies@v2
-    with:
-      extra-packages: any::jsonlite
+  - uses: r-lib/actions/setup-r-dependencies@v2
+    with:
+      dependencies: true
+      extra-packages:
+        - any::jsonlite
+        - any::pkgload
+        - any::muttest
```
Actionable comments posted: 0
♻️ Duplicate comments (2)
.github/workflows/test-mutation.yml (2)
8-8: Restrict workflow permissions to the minimum required

Granting `write-all` is overly broad for this workflow. Since it only pushes badge files, limit it to repository contents write access:

```diff
-permissions: write-all
+permissions:
+  contents: write
```
23-26: ⚠️ Potential issue: Install required R packages for mutation testing

This workflow invokes `pkgload::load_all()` and `muttest::test_plan()`, but only installs `jsonlite`. Without `pkgload` and `muttest`, the R script will fail. Update the dependencies step:

```diff
-  - uses: r-lib/actions/setup-r-dependencies@v2
-    with:
-      extra-packages: any::jsonlite
+  - uses: r-lib/actions/setup-r-dependencies@v2
+    with:
+      dependencies: true
+      extra-packages:
+        - any::jsonlite
+        - any::pkgload
+        - any::muttest
```
🧹 Nitpick comments (2)
.github/workflows/test-mutation.yml (2)
7-7: Use a descriptive workflow name

Naming the workflow after the file can make it less discoverable in the Actions UI. Consider renaming it to something more human-friendly, for example:

```diff
- name: test-mutation.yml
+ name: Mutation Testing and Badge Generation
```
81-102: Simplify badge-branch management with a dedicated Action

The custom bash logic for creating, checking out, and force-pushing the `badges` branch is error-prone and verbose. Consider using an established GitHub Action (e.g., `peter-evans/create-or-update-branch` or `peaceiris/actions-gh-pages`) to streamline branch creation and updates.
📒 Files selected for processing (3)
- .github/workflows/test-mutation.yml (1 hunks)
- DESCRIPTION (2 hunks)
- README.md (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- DESCRIPTION
- README.md
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test-mutation.yml
15-15: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: ubuntu-latest (oldrel-1)
- GitHub Check: test-coverage
- GitHub Check: ubuntu-latest (oldrel-1)
- GitHub Check: generate-badge
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 0
♻️ Duplicate comments (2)
.github/workflows/test-mutation.yml (2)
8-8: Restrict workflow permissions to the minimum required

Granting `write-all` is overly broad. Since this job only pushes badge files, limit it to repository contents write access:

```diff
-permissions: write-all
+permissions:
+  contents: write
```
23-26: ⚠️ Potential issue: Install all required R packages for mutation testing

The mutation testing script invokes `pkgload::load_all()`, `muttest::test_plan()`, and `muttest::test()`, but only `jsonlite` is installed. The workflow will fail unless `pkgload` and `muttest` are available. Update the dependencies step accordingly:

```diff
-  - uses: r-lib/actions/setup-r-dependencies@v2
-    with:
-      extra-packages: any::jsonlite
+  - uses: r-lib/actions/setup-r-dependencies@v2
+    with:
+      dependencies: true
+      extra-packages:
+        - any::jsonlite
+        - any::pkgload
+        - any::muttest
```
🧹 Nitpick comments (2)
.github/workflows/test-mutation.yml (2)
15-16: Update checkout action to the latest version

The runner of `actions/checkout@v3` may lag behind the latest GitHub Actions improvements and security fixes. Upgrade to `v4`:

```diff
-  - uses: actions/checkout@v3
+  - uses: actions/checkout@v4
```

🧰 Tools
🪛 actionlint (1.7.4)
15-15: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
7-7: Improve the workflow name for readability

Workflow names appear in the GitHub Actions UI; consider renaming from `test-mutation.yml` to a human-friendly title such as `Mutation Testing` for better clarity.
📒 Files selected for processing (3)
- .github/workflows/test-mutation.yml (1 hunks)
- DESCRIPTION (2 hunks)
- README.md (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- DESCRIPTION
- README.md
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test-mutation.yml
15-15: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: test-coverage
- GitHub Check: ubuntu-latest (release)
- GitHub Check: generate-badge
🔇 Additional comments (1)
.github/workflows/test-mutation.yml (1)
56-56: Skip loading `muttest` via `library()`

Since you call `muttest::test_plan()` and `muttest::test()` with the `::` qualifier, R will load the namespace automatically. Explicitly calling `library(muttest)` is not necessary here.
Actionable comments posted: 6
🧹 Nitpick comments (1)
tests/acceptance/box_project.feature (1)
40-91: Comprehensive test for multiple feature files with box

This scenario extends the box integration testing to cover multiple feature files and multiple exported functions. The import of specific functions in line 70 demonstrates good practices for selective imports.
There's an unnecessary empty line at line 76 between step definition functions. While not problematic, it's inconsistent with the rest of the file.
```diff
-
+
```
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (19)
- DESCRIPTION (2 hunks)
- NAMESPACE (1 hunks)
- NEWS.md (1 hunks)
- R/detect.R (3 hunks)
- R/execute.R (1 hunks)
- R/test.R (4 hunks)
- man/run.Rd (1 hunks)
- man/test.Rd (1 hunks)
- pkgdown/_pkgdown.yml (1 hunks)
- tests/acceptance/box_project.feature (1 hunks)
- tests/acceptance/custom_loading_project.feature (1 hunks)
- tests/acceptance/package_test_with_cucumber.feature (1 hunks)
- tests/acceptance/package_test_with_testthat.feature (1 hunks)
- tests/acceptance/setup-steps.R (1 hunks)
- tests/testthat/_snaps/local/examples.md (9 hunks)
- tests/testthat/test-acceptance.R (1 hunks)
- tests/testthat/test-examples.R (1 hunks)
- tests/testthat/test-execute.R (2 hunks)
- vignettes/bdd.qmd (0 hunks)
💤 Files with no reviewable changes (1)
- vignettes/bdd.qmd
✅ Files skipped from review due to trivial changes (5)
- pkgdown/_pkgdown.yml
- NAMESPACE
- tests/testthat/test-acceptance.R
- man/test.Rd
- man/run.Rd
🚧 Files skipped from review as they are similar to previous changes (5)
- DESCRIPTION
- NEWS.md
- tests/testthat/_snaps/local/examples.md
- R/detect.R
- tests/acceptance/setup-steps.R
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: ubuntu-latest (oldrel-1)
- GitHub Check: ubuntu-latest (devel)
- GitHub Check: generate-badge
🔇 Additional comments (19)
tests/testthat/test-execute.R (1)
39-39: Function name update looks good

The function calls have been correctly updated from `run` to `execute` to match the renamed function in R/execute.R.

Also applies to: 93-93
R/execute.R (1)
3-3: Good input validation improvements

The function renaming from `run` to `execute` and the added input validation improve the robustness of the code. The validation ensures that required parameters are present and of the correct type before proceeding with execution.

Also applies to: 9-11
tests/acceptance/custom_loading_project.feature (1)
1-37: Well-structured acceptance test for manual source loading

This new feature file clearly demonstrates how to run tests for a project using manual source loading. The scenario covers creating test files, source code, and step definitions, then manually loading them into a custom environment before running the tests.
tests/acceptance/package_test_with_testthat.feature (6)
152-190: Appropriately tests source code accessibility

This scenario demonstrates an important feature - the ability for step definitions to access functions defined in the package source code. This is essential for effective integration testing.

191-230: Good test for setup code availability

This scenario verifies that step definitions can access functions defined in testthat setup files, which is important for test organization and reuse. The sequence and assertions are appropriate.

231-269: Correctly tests helper code availability

This scenario properly verifies the integration with testthat's helper file loading mechanism, ensuring that functions defined in helper files are accessible from step definitions.

270-310: Tests custom step organization in subdirectories

This scenario covers an important flexibility feature - the ability to organize step definitions in custom subdirectories. The use of explicit environment creation and sourcing in lines 305-307 is a good approach to demonstrate this capability.

311-355: Verifies testthat compatibility

This final scenario appropriately tests that Cucumber feature files can coexist with standard testthat tests, which is crucial for gradual adoption. The assertion in line 354 confirms that both the feature test and the standard test are executed.
126-151: 🛠️ Refactor suggestion

Good error handling test for missing feature files

This scenario correctly tests the error handling when no feature files are found, ensuring that users get appropriate feedback. The use of `stop_on_failure = FALSE` in line 148 is important to allow the test to continue despite the expected error.

The operator precedence issue appears again in line 135.

```diff
- context$steps <- context$steps %||% 0 + 1
+ context$steps <- (context$steps %||% 0) + 1
```

Likely an incorrect or invalid review comment.
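The precedence question can in fact be checked directly in R. The sketch below defines a local `%||%` for illustration (base R ships an equivalent operator since 4.4.0); because custom `%op%` operators bind tighter than `+`, both spellings parse identically, which is why the flagged comments are marked as likely invalid:

```r
`%||%` <- function(x, y) if (is.null(x)) y else x  # null-coalescing, as in rlang

steps <- 2
steps %||% 0 + 1    # 3: parsed as (steps %||% 0) + 1, since %||% binds tighter than +
(steps %||% 0) + 1  # 3: the explicitly parenthesised form is equivalent
```

The suggested parentheses are therefore a readability improvement rather than a behavioral fix.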
tests/acceptance/package_test_with_cucumber.feature (5)
136-170: Good test for source code integration

This scenario verifies that step definitions can access package source code, which is essential for effective integration testing. The test structure and assertions are appropriate.
171-206: Setup code availability is properly tested

This scenario confirms that functions defined in setup files are accessible from step definitions when using the `features/` directory structure. The test organization is clear and effective.
207-241: Helper code integration test is well structured

This scenario verifies the integration with helper files when using the `features/` directory, ensuring that functions defined in helper files are accessible from step definitions.
114-135: 🛠️ Refactor suggestion

Error handling for missing features is well tested

This scenario appropriately tests error handling when no feature files are found in the `features/` directory. Using `stop_on_failure = FALSE` on line 132 is the correct approach to test this error condition.

The operator precedence issue appears again in line 123:

```diff
- context$steps <- context$steps %||% 0 + 1
+ context$steps <- (context$steps %||% 0) + 1
```

Likely an incorrect or invalid review comment.
72-113: 🛠️ Refactor suggestion

Filter functionality is correctly tested

This scenario verifies the ability to selectively run feature files using the filter parameter, which is important for targeted testing. The assertion on line 112 is crucial for confirming that only the filtered feature was executed.

The operator precedence issue appears again in lines 96 and 100:

```diff
- context$steps <- context$steps %||% 0 + 1
+ context$steps <- (context$steps %||% 0) + 1
```

Likely an incorrect or invalid review comment.
tests/acceptance/box_project.feature (1)
1-39: Well-designed test for `box` package integration

This scenario effectively demonstrates the integration with the `box` package, which is valuable for users who organize their code in modules. The feature file includes a clear explanation of the purpose in line 3.

The explicit call to `box::purge_cache()` in line 35 is a good practice to ensure tests start with a clean state.

R/test.R (4)
1-46: Good implementation of the `run()` function with clear documentation

The new exported `run()` function is well-documented with clear examples and parameter descriptions. The implementation correctly handles feature discovery, filtering, validation, and execution.

I appreciate the clear error handling in lines 36-38 that provides a helpful message when no feature files are found.
74-87: Updated `test_cucumber_code()` function with proper quoting

The `test_cucumber_code()` function has been correctly updated to generate code that calls the new `run()` function instead of the removed `run_features()`. The use of `shQuote()` in lines 79 and 83 ensures proper handling of paths with spaces or special characters.
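The effect of `shQuote()` when interpolating paths into generated code can be sketched as follows (the path and the `run(...)` call template are illustrative, not taken from the package source):

```r
path <- "tests/my features"        # a path containing a space, for illustration

sprintf("run(%s)", path)           # "run(tests/my features)"  -- not parseable R code
sprintf("run(%s)", shQuote(path))  # "run('tests/my features')" -- safely quoted
```

Without quoting, the generated expression would fail to parse (or worse, parse to something unintended) as soon as a path contains spaces or quote characters.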
91-105: Clear documentation updates for `test()` function

The updated documentation for the `test()` function properly references the new `run()` function and provides clear guidance on how to use them together. The recommendation in line 104 about using `cucumber::run()` for integration with testthat is particularly helpful.
148-162: Simplified `test()` implementation with proper result handling

The updated implementation of `test()` correctly uses the new `test_cucumber_code()` function to generate code that calls `run()`. The change to return the result invisibly in line 161 is a good practice that allows for programmatic inspection of test results.
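A minimal sketch of the generated acceptance-test file, assuming the `run(path, filter)` signature described in this PR's walkthrough (parameter names are taken from that summary, not verified against the released API):

```r
# tests/testthat/test-acceptance.R -- illustrative sketch only.
# run() discovers .feature files under `path`, validates them, executes
# them against the registered step definitions, and returns invisibly.
cucumber::run(
  path = ".",    # directory searched for .feature files
  filter = NULL  # optional pattern to select a subset of feature files
)
```

Because the result is returned invisibly, callers who want to inspect it can still capture it with `result <- cucumber::run(...)`.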
```diff
 it("should throw an error if no test files are found", {
-  expect_error(
-    test_example(
-      "with_testthat_filtering",
-      filter = "this_feature_doesnt_exist"
-    ),
-    "No feature files found"
-  )
+  test_example(
+    "with_testthat_filtering",
+    filter = "this_feature_doesnt_exist"
+  )
 })
```
Test title doesn't match implementation
The test title states "should throw an error if no test files are found", but the implementation no longer expects an error. This creates a disconnect between what the test says it does and what it actually does.
Consider either:
- Updating the test title to match the new behavior:

```diff
- it("should throw an error if no test files are found", {
+ it("should handle case when no test files are found", {
```

- Or, if the function should still throw an error, restore the `expect_error()` wrapper:

```diff
  it("should throw an error if no test files are found", {
-   test_example(
-     "with_testthat_filtering",
-     filter = "this_feature_doesnt_exist"
-   )
+   expect_error(
+     test_example(
+       "with_testthat_filtering",
+       filter = "this_feature_doesnt_exist"
+     ),
+     "No feature files found" # or appropriate error message
+   )
  })
```

🤖 Prompt for AI Agents
In tests/testthat/test-examples.R around lines 129 to 133, the test title claims
it should throw an error if no test files are found, but the current
implementation does not expect an error. To fix this, either update the test
title to accurately describe the current behavior without expecting an error, or
revert the implementation to wrap the test_example call inside expect_error() if
the function is supposed to throw an error.
```gherkin
Feature: Testing a package with cucumber tests in tests/testthat/ directory
  Scenario: One feature file
    Given a file named "DESCRIPTION" with
      """
      Package: example
      Version: 0.1.0
      """
    And a file named "tests/testthat/package.feature" with
      """
      Feature: A package
        Scenario: A scenario
          Given a scenario step
          Then it has 1 steps
      """
    And a file named "tests/testthat/setup-steps.R" with
      """
      given("a scenario step", function(context) {
        context$steps <- context$steps %||% 0 + 1
      })
      then("it has {int} steps", function(n, context) {
        expect_equal(context$steps, n)
      })
      """
    And a file named "tests/testthat/test-acceptance.R" with
      """
      run()
      """
    When I run
      """
      testthat::test_local()
      """
    Then it passes

  Scenario: Multiple feature files
```
🛠️ Refactor suggestion
Well-structured integration test for testthat directory feature
This first scenario effectively tests the integration of Cucumber with testthat in a package context. The test covers the basic functionality of running a single feature file in the tests/testthat/ directory, which is an important use case for R package developers.
The step definition in line 18, `context$steps <- context$steps %||% 0 + 1`, has a potential issue though - the operator precedence may not work as intended. The `%||%` operator will be evaluated first, then the addition. This might lead to incorrect step counting.

```diff
- context$steps <- context$steps %||% 0 + 1
+ context$steps <- (context$steps %||% 0) + 1
```

📝 Committable suggestion
+ context$steps <- (context$steps %||% 0) + 1📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```gherkin
Feature: Testing a package with cucumber tests in tests/testthat/ directory
  Scenario: One feature file
    Given a file named "DESCRIPTION" with
      """
      Package: example
      Version: 0.1.0
      """
    And a file named "tests/testthat/package.feature" with
      """
      Feature: A package
        Scenario: A scenario
          Given a scenario step
          Then it has 1 steps
      """
    And a file named "tests/testthat/setup-steps.R" with
      """
      given("a scenario step", function(context) {
        context$steps <- (context$steps %||% 0) + 1
      })
      then("it has {int} steps", function(n, context) {
        expect_equal(context$steps, n)
      })
      """
    And a file named "tests/testthat/test-acceptance.R" with
      """
      run()
      """
    When I run
      """
      testthat::test_local()
      """
    Then it passes

  Scenario: Multiple feature files
```
🤖 Prompt for AI Agents
In tests/acceptance/package_test_with_testthat.feature around lines 15 to 20,
the step definition uses the expression context$steps <- context$steps %||% 0 +
1, which due to operator precedence adds 1 after the %||% operation, potentially
causing incorrect step counting. To fix this, add parentheses around
(context$steps %||% 0) so that the null-coalescing operator is evaluated first,
then 1 is added, ensuring correct increment of context$steps.
```gherkin
Scenario: Multiple feature files with a filter
  Given a file named "DESCRIPTION" with
    """
    Package: example
    Version: 0.1.0
    """
  And a file named "tests/testthat/test1.feature" with
    """
    Feature: Feature 1
      Scenario: a scenario
        Given a scenario step
        Given a scenario step
        Then it has 2 steps
    """
  And a file named "tests/testthat/test2.feature" with
    """
    Feature: Feature 2
      Scenario: another scenario
        Given another scenario step
        Then it has 1 steps
    """
  And a file named "tests/testthat/setup-steps.R" with
    """
    given("a scenario step", function(context) {
      context$steps <- context$steps %||% 0 + 1
    })
    given("another scenario step", function(context) {
      context$steps <- context$steps %||% 0 + 1
    })
    then("it has {int} steps", function(n, context) {
      expect_equal(context$steps, n)
    })
    """
  And a file named "tests/testthat/test-acceptance.R" with
    """
    run(filter = "test1")
    """
  When I run
    """
    testthat::test_local()
    """
  Then it passes
  Then only "Feature 1" was run
```
🛠️ Refactor suggestion
Filtering functionality is well tested
This scenario is important as it verifies that users can filter which feature files to run, which is essential for large test suites. I like that line 124 includes a specific assertion to verify only the filtered feature was run.
The operator precedence issue appears again in line 104:

```diff
- context$steps <- context$steps %||% 0 + 1
+ context$steps <- (context$steps %||% 0) + 1
```

🤖 Prompt for AI Agents
In tests/acceptance/package_test_with_testthat.feature between lines 80 and 125,
the operator precedence in the step definitions within setup-steps.R is
incorrect, causing the context$steps increment to not work as intended. Fix this
by properly grouping the expression so that the default value 0 is used when
context$steps is NULL before adding 1, ensuring the increment logic works
correctly.
```gherkin
Scenario: Multiple feature files
  Given a file named "DESCRIPTION" with
    """
    Package: example
    Version: 0.1.0
    """
  And a file named "tests/testthat/test1.feature" with
    """
    Feature: a feature
      Scenario: a scenario
        Given a scenario step
        Given a scenario step
        Then it has 2 steps
    """
  And a file named "tests/testthat/test2.feature" with
    """
    Feature: another feature
      Scenario: another scenario
        Given another scenario step
        Then it has 1 steps
    """
  And a file named "tests/testthat/setup-steps.R" with
    """
    given("a scenario step", function(context) {
      context$steps <- context$steps %||% 0 + 1
    })
    given("another scenario step", function(context) {
      context$steps <- context$steps %||% 0 + 1
    })
    then("it has {int} steps", function(n, context) {
      expect_equal(context$steps, n)
    })
    """
  And a file named "tests/testthat/test-acceptance.R" with
    """
    run()
    """
  When I run
    """
    testthat::test_local()
    """
  Then it passes
```
🛠️ Refactor suggestion
Comprehensive test for multiple feature file support
This scenario appropriately tests the ability to run multiple feature files in the same test environment, ensuring that context isolation works correctly between different feature files. The test setup and assertions are well-structured.
The same operator precedence issue exists in line 59 and should be fixed:
```diff
- context$steps <- context$steps %||% 0 + 1
+ context$steps <- (context$steps %||% 0) + 1
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```gherkin
Scenario: Multiple feature files
  Given a file named "DESCRIPTION" with
    """
    Package: example
    Version: 0.1.0
    """
  And a file named "tests/testthat/test1.feature" with
    """
    Feature: a feature
      Scenario: a scenario
        Given a scenario step
        Given a scenario step
        Then it has 2 steps
    """
  And a file named "tests/testthat/test2.feature" with
    """
    Feature: another feature
      Scenario: another scenario
        Given another scenario step
        Then it has 1 steps
    """
  And a file named "tests/testthat/setup-steps.R" with
    """
    given("a scenario step", function(context) {
      context$steps <- (context$steps %||% 0) + 1
    })
    given("another scenario step", function(context) {
      context$steps <- context$steps %||% 0 + 1
    })
    then("it has {int} steps", function(n, context) {
      expect_equal(context$steps, n)
    })
    """
  And a file named "tests/testthat/test-acceptance.R" with
    """
    run()
    """
  When I run
    """
    testthat::test_local()
    """
  Then it passes
```
🤖 Prompt for AI Agents
In tests/acceptance/package_test_with_testthat.feature around lines 35 to 79,
the test step definitions use the expression 'context$steps <- context$steps
%||% 0 + 1' which has incorrect operator precedence causing unintended results.
To fix this, add parentheses around 'context$steps %||% 0' so the increment
applies correctly, changing it to 'context$steps <- (context$steps %||% 0) + 1'
in both step definitions on line 59.
```gherkin
Feature: Testing a package with cucumber tests in features/ directory
  Scenario: One feature file
    Given a file named "DESCRIPTION" with
      """
      Package: example
      Version: 0.1.0
      """
    And a file named "features/package.feature" with
      """
      Feature: A package
        Scenario: A scenario
          Given a scenario step
          Then it has 1 steps
      """
    And a file named "features/setup-steps.R" with
      """
      given("a scenario step", function(context) {
        context$steps <- context$steps %||% 0 + 1
      })
      then("it has {int} steps", function(n, context) {
        expect_equal(context$steps, n)
      })
      """
    When I run
      """
      test("features")
      """
    Then it passes
```
🛠️ Refactor suggestion
Good basic test for features directory integration
This scenario effectively tests the primary use case of running a single feature file from a dedicated features/ directory. The test structure is clear and validates the essential functionality.
However, the same operator precedence issue exists in line 18:
```diff
- context$steps <- context$steps %||% 0 + 1
+ context$steps <- (context$steps %||% 0) + 1
```

🤖 Prompt for AI Agents
In tests/acceptance/package_test_with_cucumber.feature around lines 15 to 22,
the expression context$steps <- context$steps %||% 0 + 1 has an operator
precedence issue causing incorrect increment logic. To fix it, add parentheses
around the fallback value and the addition like this: context$steps <-
(context$steps %||% 0) + 1 to ensure the default 0 is used when context$steps is
NULL before adding 1.
```gherkin
Scenario: Multiple feature files
  Given a file named "DESCRIPTION" with
    """
    Package: example
    Version: 0.1.0
    """
  And a file named "features/test1.feature" with
    """
    Feature: a feature
      Scenario: a scenario
        Given a scenario step
        Given a scenario step
        Then it has 2 steps
    """
  And a file named "features/test2.feature" with
    """
    Feature: another feature
      Scenario: another scenario
        Given another scenario step
        Then it has 1 steps
    """
  And a file named "features/setup-steps.R" with
    """
    given("a scenario step", function(context) {
      context$steps <- context$steps %||% 0 + 1
    })
    given("another scenario step", function(context) {
      context$steps <- context$steps %||% 0 + 1
    })
    then("it has {int} steps", function(n, context) {
      expect_equal(context$steps, n)
    })
    """
  When I run
    """
    test("features")
    """
  Then it passes
```
🛠️ Refactor suggestion
Proper test for multiple feature file execution
This scenario tests the ability to run multiple feature files from the features/ directory, ensuring that all features are discovered and executed correctly.
The operator precedence issue appears again in lines 55 and 59:
```diff
- context$steps <- context$steps %||% 0 + 1
+ context$steps <- (context$steps %||% 0) + 1
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```gherkin
Scenario: Multiple feature files
  Given a file named "DESCRIPTION" with
    """
    Package: example
    Version: 0.1.0
    """
  And a file named "features/test1.feature" with
    """
    Feature: a feature
      Scenario: a scenario
        Given a scenario step
        Given a scenario step
        Then it has 2 steps
    """
  And a file named "features/test2.feature" with
    """
    Feature: another feature
      Scenario: another scenario
        Given another scenario step
        Then it has 1 steps
    """
  And a file named "features/setup-steps.R" with
    """
    given("a scenario step", function(context) {
      context$steps <- (context$steps %||% 0) + 1
    })
    given("another scenario step", function(context) {
      context$steps <- (context$steps %||% 0) + 1
    })
    then("it has {int} steps", function(n, context) {
      expect_equal(context$steps, n)
    })
    """
  When I run
    """
    test("features")
    """
  Then it passes
```
🤖 Prompt for AI Agents
In tests/acceptance/package_test_with_cucumber.feature between lines 31 and 71,
the step definitions for incrementing context$steps use the expression
context$steps %||% 0 + 1, which due to operator precedence adds 1 to 0 before
applying %||%, causing incorrect step counting. To fix this, add parentheses
around (context$steps %||% 0) + 1 to ensure the default 0 is used when
context$steps is NULL, then increment by 1 correctly.
Summary by CodeRabbit

New Features

- New `run()` function to execute Cucumber tests within a `testthat` context.
- Acceptance tests covering integration with the `box` package.
- Exported the `run` function for public use and added documentation.

Bug Fixes

Documentation

- Documentation updates for the `run()` and `test()` functions.

Chores

Tests

Refactor

- Renamed the internal `run` function to `execute` to avoid conflicts and clarified usage.