feat: add support for dedup #14

Merged
2 commits merged into keploy:main on Jul 31, 2025

Conversation

officialasishkumar
Member

Issue that this pull request solves

Closes: https://github.com/keploy/enterprise/issues/1350

Proposed changes

Brief description of what is fixed or changed

Types of changes

Put an x in the boxes that apply

  • Bugfix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Other (please describe):

Checklist

Put an x in the boxes that apply

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have created a new branch for this pull request
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • My changes do not break the current system and pass all the current test cases.

Signed-off-by: Asish Kumar <officialasishkumar@gmail.com>
@Copilot — Copilot AI review requested due to automatic review settings, July 31, 2025 08:07
Copilot AI left a comment

Pull Request Overview

This PR refactors the Keploy Python SDK to implement a new coverage agent architecture based on Unix sockets. The old framework-specific middleware approach has been completely removed in favor of a unified agent that communicates with Keploy over sockets.

  • Replaces framework-specific coverage middleware with a unified socket-based agent
  • Switches from YAML file output to real-time JSON socket communication
  • Updates packaging from setup.py to modern pyproject.toml format
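
To make the socket-based flow described above concrete, here is a minimal sketch of a Unix-socket server exchanging newline-delimited JSON control messages. This is an illustration of the general pattern only: the socket path, the message fields ("command", "id"), and the serve() function are assumptions for this example, not the SDK's actual protocol.

import json
import os
import socket

SOCKET_PATH = "/tmp/keploy-agent.sock"  # hypothetical path, not necessarily the SDK's socket

def serve():
    # Remove any stale socket file, then listen on the Unix socket.
    if os.path.exists(SOCKET_PATH):
        os.remove(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    server.listen(1)
    while True:
        conn, _ = server.accept()
        with conn, conn.makefile("r") as reader:
            for line in reader:
                # One JSON object per line, e.g. {"command": "start", "id": "test-1"}.
                msg = json.loads(line)
                reply = {"ok": True, "id": msg.get("id")}
                conn.sendall((json.dumps(reply) + "\n").encode())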

Reviewed Changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 4 comments.

File | Description
src/keploy_agent/__init__.py | New coverage agent implementation using Unix sockets for communication
src/keploy/utils.py | Removed legacy YAML-based dedup functionality
src/keploy/flaskCov.py | Removed Flask-specific coverage middleware
src/keploy/fastApiCov.py | Removed FastAPI-specific coverage middleware
src/keploy/djangoCov.py | Removed Django-specific coverage middleware
src/keploy/__init__.py | Removed legacy SDK exports
src/keploy/Keploy.py | Removed legacy test runner implementation
setup.py | Removed legacy setup.py configuration
pyproject.toml | Added modern project configuration
__init__.py | Removed root-level legacy exports
README.md | Updated documentation for new agent-based approach
.github/workflows/main.yml | Updated CI to use modern build tools

Comment on lines +9 to +10
# Set up logging to be informative
logging.basicConfig(level=logging.INFO, format='[Keploy Agent] %(asctime)s - %(levelname)s - %(message)s')
Copilot AI, Jul 31, 2025

Calling logging.basicConfig() at module level may interfere with the application's logging configuration. Consider using a module-specific logger with getLogger(__name__) instead of configuring the root logger.

Suggested change

- # Set up logging to be informative
- logging.basicConfig(level=logging.INFO, format='[Keploy Agent] %(asctime)s - %(levelname)s - %(message)s')
+ # Set up a module-specific logger
+ logger = logging.getLogger(__name__)
+ logger.setLevel(logging.INFO)
+ handler = logging.StreamHandler()
+ formatter = logging.Formatter('[Keploy Agent] %(asctime)s - %(levelname)s - %(message)s')
+ handler.setFormatter(formatter)
+ logger.addHandler(handler)


# Stores the ID of the test case currently being recorded
current_test_id = None

cov = coverage.Coverage(data_file=None, auto_data=True)
Copilot AI, Jul 31, 2025

The global coverage instance should be properly initialized when coverage starts rather than at import time. This could cause issues if multiple tests run concurrently or if the coverage instance needs to be reset.

Suggested change

- cov = coverage.Coverage(data_file=None, auto_data=True)
+ cov = None  # Placeholder for the coverage instance, initialized dynamically

officialasishkumar (Member, Author)

We need it at import time.
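
For context, a minimal sketch of why creating the coverage object at import time can matter (an illustration of the pattern, not the SDK's actual code; the my_app import is hypothetical): an instance created and started when the agent module is imported also measures module-level lines executed while the application's own modules are imported, which a lazily created instance would miss.

import coverage

# Created (and started) at import time so that module-level code executed
# by later imports is also measured.
cov = coverage.Coverage(data_file=None, auto_data=True)
cov.start()

# import my_app  # hypothetical application module; its import-time lines would be recorded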

Comment on lines +141 to +150
    while True:
        conn, _ = server.accept()
        handler_thread = threading.Thread(target=handle_control_request, args=(conn,))
        handler_thread.start()

except Exception as e:
    logging.error(f"Control server failed: {e}", exc_info=True)
finally:
    server.close()
    logging.info("Control server shut down.")
Copilot AI, Jul 31, 2025

Handler threads are created but never joined or tracked for cleanup. This could lead to resource leaks if many connections are made. Consider using a thread pool or tracking threads for proper cleanup.

Suggested change

-     while True:
-         conn, _ = server.accept()
-         handler_thread = threading.Thread(target=handle_control_request, args=(conn,))
-         handler_thread.start()
- except Exception as e:
-     logging.error(f"Control server failed: {e}", exc_info=True)
- finally:
-     server.close()
-     logging.info("Control server shut down.")
+     with ThreadPoolExecutor(max_workers=10) as executor:  # Limit to 10 concurrent threads
+         while True:
+             conn, _ = server.accept()
+             executor.submit(handle_control_request, conn)
+ except Exception as e:
+     logging.error(f"Control server failed: {e}", exc_info=True)
+ finally:
+     server.close()
+     logging.info("Control server shut down.")
+     # Ensure all threads in the pool are cleaned up
+     executor.shutdown(wait=True)


Comment on lines +157 to +161
# Start the control server in a background daemon thread.
control_thread = threading.Thread(target=start_control_server, daemon=True)
control_thread.start()

logging.info("Agent initialized and control server started in the background.")
Copilot AI, Jul 31, 2025

Using a daemon thread for the control server means it will be forcibly terminated when the main program exits, potentially interrupting active coverage collection. Consider implementing graceful shutdown instead.

Suggested change

- # Start the control server in a background daemon thread.
- control_thread = threading.Thread(target=start_control_server, daemon=True)
- control_thread.start()
- logging.info("Agent initialized and control server started in the background.")
+ # Initialize the shutdown event and start the control server in a background thread.
+ shutdown_event = threading.Event()
+ control_thread = threading.Thread(target=start_control_server, args=(shutdown_event,))
+ control_thread.start()
+ logging.info("Agent initialized and control server started in the background.")
+
+ # Register a shutdown hook to ensure graceful termination.
+ def shutdown_handler():
+     logging.info("Shutdown signal received. Stopping control server...")
+     shutdown_event.set()
+     control_thread.join()
+     logging.info("Control server stopped.")
+
+ import atexit
+ atexit.register(shutdown_handler)


Signed-off-by: Asish Kumar <officialasishkumar@gmail.com>
@Sarthak160 merged commit d68abb7 into keploy:main on Jul 31, 2025
1 check passed