
Integrate Daily Civic Intelligence Engine with Issue Creation#470

Open
RohanExploit wants to merge 2 commits into main from civic-intelligence-engine-integration-9064613965019712610

Conversation

@RohanExploit RohanExploit (Owner) commented Feb 24, 2026

Integrates the pre-existing CivicIntelligenceEngine and AdaptiveWeights with the issue creation workflow in backend/routers/issues.py.
Updates create_issue to use a dynamic duplicate search radius learned from historical patterns instead of a hardcoded 50m radius.
Adds a comprehensive system test backend/tests/test_civic_intelligence_system.py to verify the daily refinement cycle, including snapshot generation and weight updates.
Verifies existing unit tests pass.
Note: The core engine logic (backend/civic_intelligence.py, backend/adaptive_weights.py) was already present in the codebase. This PR wires it up and adds system-level verification.
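The wiring described above can be sketched as follows. This is an illustrative stand-in, not the actual backend code: only the name `adaptive_weights.get_duplicate_search_radius()` comes from the PR; the class body and the bounding-box helper are assumptions.

```python
class AdaptiveWeights:
    """Stand-in for backend/adaptive_weights.py (illustrative only)."""

    def __init__(self, default_radius_m: float = 50.0):
        # In the real engine, the daily refinement cycle would update
        # this value from historical duplicate patterns.
        self._radius_m = default_radius_m

    def get_duplicate_search_radius(self) -> float:
        return self._radius_m


adaptive_weights = AdaptiveWeights()


def duplicate_search_bbox(lat: float, lon: float):
    """Bounding box for the duplicate search, sized by the learned radius."""
    # Before this PR: search_radius = 50.0 (hardcoded)
    search_radius = adaptive_weights.get_duplicate_search_radius()
    deg = search_radius / 111_000.0  # rough meters-to-degrees conversion
    return (lat - deg, lat + deg, lon - deg, lon + deg)
```

If the daily cycle later writes a larger radius into the weights, the next create_issue call would pick it up with no code change, which is the point of the integration.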


PR created automatically by Jules for task 9064613965019712610 started by @RohanExploit


Summary by cubic

Integrates the CivicIntelligenceEngine with issue creation by replacing the fixed 50 m duplicate search radius with a learned, dynamic radius. Adds a system test for the daily refinement cycle and fixes Render deployment by setting PYTHONPATH to '.'.

  • New Features

    • Issue creation now uses adaptive_weights.get_duplicate_search_radius for bounding box filtering and nearby search, removing the hardcoded 50 m radius.
  • Tests

    • Added a system test that runs the daily cycle, checks snapshot output and clustering, and confirms category multiplier updates (e.g., Fire -> 1.1).

Written for commit 1b17c82. Summary will update on new commits.

Summary by CodeRabbit

  • Tests

    • Added a comprehensive integration test suite to validate snapshot generation, clustering, and adaptive weight updates across scenarios.
  • Refactor

    • Improved issue deduplication by using intelligent, adaptive search radius to better identify and consolidate nearby duplicate reports.
  • Chores

    • Adjusted runtime environment configuration to update the Python module search path.

Copilot AI review requested due to automatic review settings February 24, 2026 19:04
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@netlify

netlify bot commented Feb 24, 2026

Deploy Preview for fixmybharat canceled.

🔨 Latest commit: 1b17c82
🔍 Latest deploy log: https://app.netlify.com/projects/fixmybharat/deploys/699df81a1b5ee9000919be97

@github-actions

🙏 Thank you for your contribution, @RohanExploit!

PR Details:

Quality Checklist:
Please ensure your PR meets the following criteria:

  • Code follows the project's style guidelines
  • Self-review of code completed
  • Code is commented where necessary
  • Documentation updated (if applicable)
  • No new warnings generated
  • Tests added/updated (if applicable)
  • All tests passing locally
  • No breaking changes to existing functionality

Review Process:

  1. Automated checks will run on your code
  2. A maintainer will review your changes
  3. Address any requested changes promptly
  4. Once approved, your PR will be merged! 🎉

Note: The maintainers will monitor code quality and ensure the overall project flow isn't broken.

@coderabbitai

coderabbitai bot commented Feb 24, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 34e9113 and 1b17c82.

📒 Files selected for processing (1)
  • render.yaml

📝 Walkthrough

Replaces a hardcoded 50.0 m spatial deduplication radius with a learned adaptive radius and adds an end-to-end test for the Civic Intelligence engine that verifies snapshots, clustering, and adaptive weight updates.

Changes

Cohort / File(s) — Summary

  • Adaptive deduplication logic (backend/routers/issues.py): Replaces the static 50.0 m radius with adaptive_weights.get_duplicate_search_radius() for bounding-box filtering, nearby-issue lookups, and nearest-issue deduplication; imports adaptive_weights.
  • Civic Intelligence integration test (backend/tests/test_civic_intelligence_system.py): Adds an integration-style test: in-memory SQLite fixtures, mocked snapshot/weights paths, seeded issues/grievances/jurisdiction, patched SessionLocal, runs civic_intelligence_engine.run_daily_cycle(), asserts snapshot contents (civic_index, clusters), and verifies weight updates (Fire multiplier increase).
  • Runtime config (render.yaml): Adjusts PYTHONPATH from backend to . for the Render environment.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Poem

🐰 I measured hops, I learned to sway,
From fifty meters I hopped away.
Weights hum softly, clusters sing,
Snapshots bloom each morning ring.
A tiny carrot for smarter play 🥕✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check — ⚠️ Warning: The description covers the key changes and objectives, but the required template sections (Type of Change, Testing Done checklist items, Related Issue, Checklist) are missing or incomplete. Resolution: Complete the description template by selecting Type of Change (e.g., New feature), marking Testing Done checkboxes, linking the Related Issue, and confirming Checklist items.
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 20.00%, below the required threshold of 80.00%. Resolution: Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (1 passed)

  • Title check — ✅ Passed: The title accurately reflects the main change: integrating the CivicIntelligenceEngine with issue creation by using a dynamic search radius instead of hardcoded values.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch civic-intelligence-engine-integration-9064613965019712610

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/routers/issues.py (1)

97-124: ⚠️ Potential issue | 🟡 Minor

No upper-bound guard on the learned radius.

get_duplicate_search_radius() returns whatever is stored in the weights file (default 50.0). If the adaptive logic or a corrupt file produces an extremely large value, the bounding-box query will scan a disproportionately large area, degrading performance and potentially returning thousands of candidates. The user-facing /api/issues/nearby endpoint already caps radius at 500 m (line 293); consider applying a similar clamp here.

Proposed fix
             search_radius = adaptive_weights.get_duplicate_search_radius()
+            # Clamp to a sane range to guard against corrupt/extreme weight values
+            search_radius = max(10.0, min(search_radius, 500.0))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routers/issues.py` around lines 97 - 124, Clamp the learned search
radius returned by adaptive_weights.get_duplicate_search_radius() to a safe
upper bound (e.g., 500 meters) before using it to compute the bounding box and
before calling find_nearby_issues; update the variable search_radius =
min(search_radius, 500) (or similar) immediately after calling
get_duplicate_search_radius() so the run_in_threadpool db.query bounding-box
filter and the subsequent find_nearby_issues call both use the clamped value,
and optionally add a debug/processLogger.warn noting when the radius was
truncated.
🧹 Nitpick comments (2)
backend/routers/issues.py (1)

97-101: Synchronous file I/O on the async event loop.

adaptive_weights.get_duplicate_search_radius() calls _check_reload(), which may stat or re-read the weights JSON file. This is blocking I/O inside an async def handler, which can stall the event loop under load. Wrap in run_in_threadpool for consistency with the rest of this function, or cache the value so the file isn't touched per-request.

Proposed fix
-            search_radius = adaptive_weights.get_duplicate_search_radius()
+            search_radius = await run_in_threadpool(adaptive_weights.get_duplicate_search_radius)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routers/issues.py` around lines 97 - 101, The call to
adaptive_weights.get_duplicate_search_radius() performs blocking file I/O via
_check_reload() inside an async handler; change this to run in a threadpool
(e.g., wrap the call with run_in_threadpool or anyio.to_thread) or read/cached
the radius once and reuse it per-request so the event loop isn't blocked—update
the call site in the async handler in backend/routers/issues.py where
search_radius is assigned, or add a cached accessor on the AdaptiveWeights
instance that returns a preloaded value instead of invoking
get_duplicate_search_radius() synchronously.
backend/tests/test_civic_intelligence_system.py (1)

14-29: Prefer an in-memory SQLite DB to avoid file-system side effects.

Using a file-based test_system.db (line 15) is fragile: it depends on CWD, collides under parallel test execution (pytest-xdist), and requires manual cleanup that can fail if the file is still locked. An in-memory DB with StaticPool avoids all of this.

Proposed fix
-TEST_DB_URL = "sqlite:///./test_system.db"
-engine = create_engine(TEST_DB_URL, connect_args={"check_same_thread": False})
+from sqlalchemy.pool import StaticPool
+
+engine = create_engine(
+    "sqlite://",
+    connect_args={"check_same_thread": False},
+    poolclass=StaticPool,
+)
 TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

And in the fixture, remove the file cleanup:

     finally:
         db.close()
         Base.metadata.drop_all(bind=engine)
-        if os.path.exists("./test_system.db"):
-            os.remove("./test_system.db")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_civic_intelligence_system.py` around lines 14 - 29,
Replace the file-based TEST_DB_URL/engine with an in-memory SQLite engine using
poolclass=StaticPool to avoid filesystem side effects: set TEST_DB_URL to
"sqlite:///:memory:", recreate engine via create_engine(TEST_DB_URL,
connect_args={"check_same_thread": False}, poolclass=StaticPool) (import
StaticPool from sqlalchemy.pool) and keep TestingSessionLocal =
sessionmaker(...). In the test_db fixture (function name test_db, and calls to
Base.metadata.create_all / drop_all and db.close), remove the file cleanup block
that checks os.path.exists("./test_system.db") and os.remove(...) since the
in-memory DB does not create a file.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 39d5fbb and 34e9113.

📒 Files selected for processing (2)
  • backend/routers/issues.py
  • backend/tests/test_civic_intelligence_system.py

Comment on lines +103 to +127
for i in range(3):
    g = Grievance(
        issue_id=100+i, # Dummy ID
        category="Fire",
        severity=SeverityLevel.CRITICAL,
        status=GrievanceStatus.IN_PROGRESS,
        current_jurisdiction_id=j.id,
        assigned_authority="Test Auth",
        sla_deadline=now + timedelta(days=1),
        unique_id=f"G-{i}"
    )
    test_db.add(g)
    test_db.flush()

    audit = EscalationAudit(
        grievance_id=g.id,
        previous_authority="System",
        new_authority="Admin",
        reason=EscalationReason.SEVERITY_UPGRADE,
        timestamp=now,
        notes="Manual Upgrade by Admin"
    )
    test_db.add(audit)

test_db.commit()

⚠️ Potential issue | 🟡 Minor

Grievances reference non-existent issue_id values (100, 101, 102).

Grievance(issue_id=100+i, ...) uses dummy IDs that don't correspond to any Issue row. If PRAGMA foreign_keys is enabled (or if the test is ever ported to PostgreSQL), these inserts will fail with an FK violation. Safer to reference actual Issue IDs created earlier in the test.

Proposed fix (sketch)
+    issues = []
     for i in range(6):
         issue = Issue(
             description=f"Pothole {i}",
             ...
         )
         test_db.add(issue)
+        issues.append(issue)
+    test_db.flush()  # populate auto-generated IDs
     ...
     for i in range(3):
         g = Grievance(
-            issue_id=100+i,  # Dummy ID
+            issue_id=issues[i].id,
             ...
         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_civic_intelligence_system.py` around lines 103 - 127, The
Grievance records are being created with dummy issue_id values (e.g., 100+i)
which can violate foreign key constraints; update the loop that creates
Grievance instances (the Grievance(...) block) to use the real Issue IDs created
earlier in the test (use the Issue objects or their .id attributes instead of
hard-coded 100+i), ensure each Grievance.grievance_id references an existing
Issue.id before adding to test_db and flushing, and then proceed to create the
EscalationAudit entries (EscalationAudit(grievance_id=g.id, ...)) and commit as
before.

Comment on lines +129 to +132
# 2. Run Engine
# We need to patch SessionLocal in civic_intelligence to use our test_db
with patch("backend.civic_intelligence.SessionLocal", return_value=test_db):
    civic_intelligence_engine.run_daily_cycle()

⚠️ Potential issue | 🔴 Critical


Mock breaks when run_daily_cycle closes the session.

The patch patch("backend.civic_intelligence.SessionLocal", return_value=test_db) returns the same session object for every SessionLocal() call. Since run_daily_cycle() explicitly closes the session at the end of its execution (in the finally block), the fixture's test_db becomes closed and unusable for subsequent assertions or teardown. Use a context manager or side_effect to manage session lifecycle properly, or wrap the session in a session_scope utility.
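The fix this comment asks for can be demonstrated with plain mocks. The session and engine below are simplified stand-ins, not the real `civic_intelligence` module; only the `return_value` vs `side_effect` distinction is the point.

```python
from unittest.mock import patch


class FakeSession:
    """Stand-in for a SQLAlchemy Session."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def session_factory():
    return FakeSession()


class engine_module:
    """Stand-in for backend.civic_intelligence."""

    SessionLocal = staticmethod(session_factory)

    @staticmethod
    def run_daily_cycle():
        db = engine_module.SessionLocal()
        try:
            pass  # ... daily work ...
        finally:
            db.close()  # the real run_daily_cycle closes its session too


# Problematic pattern: return_value hands the fixture's own session to
# the engine, which closes it in its finally block.
test_db = FakeSession()
with patch.object(engine_module, "SessionLocal", return_value=test_db):
    engine_module.run_daily_cycle()
fixture_was_closed = test_db.closed

# Suggested pattern: side_effect creates a fresh session for each
# SessionLocal() call, so the fixture's session survives the cycle.
test_db2 = FakeSession()
with patch.object(engine_module, "SessionLocal", side_effect=session_factory):
    engine_module.run_daily_cycle()
fixture_survived = not test_db2.closed
```

In the real test, `side_effect` would point at the `TestingSessionLocal` factory so the engine and the assertions each get their own session bound to the same test engine.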

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_civic_intelligence_system.py` around lines 129 - 132, The
test currently patches SessionLocal to always return the same Session instance
(test_db) which gets closed by civic_intelligence.run_daily_cycle()’s finally
block; change the patch so each SessionLocal() call returns a fresh/managed
session instead of the single test_db. Replace the current patch of SessionLocal
with either a side_effect that creates a new session for each call (e.g.,
side_effect=lambda: TestSessionFactory() or a helper that calls sessionmaker())
or patch SessionLocal to a context-manager wrapper (session_scope) that yields
test_db without letting run_daily_cycle close the underlying fixture; update
references to SessionLocal, test_db, and run_daily_cycle accordingly.

Comment on lines +143 to +151
# Verify cluster detected
clusters = snapshot_data["trends"]["clusters"]
# DBSCAN clustering depends on scikit-learn availability.
# If scikit-learn is missing, it returns all issues as clusters (if fallback logic is used)
# or empty if strictly depending on sklearn.
# But current implementation in trend_analyzer calls spatial_utils.cluster_issues_dbscan
# which falls back to list of lists if sklearn missing.
# So we should see clusters.
assert len(clusters) > 0

⚠️ Potential issue | 🟡 Minor

Assertion depends on scikit-learn availability — potential flaky test.

The comments on lines 145-150 acknowledge that clustering behavior varies based on whether sklearn is installed. If it's not a guaranteed dependency in the test environment, assert len(clusters) > 0 can fail unpredictably. Consider either:

  • Adding sklearn as a test dependency, or
  • Guarding with pytest.importorskip("sklearn") at the top of the test.
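The second option (relaxing the assertion) can be sketched as follows, probing for sklearn via the standard library. The helper name is hypothetical; `clusters` mirrors the test's variable.

```python
import importlib.util

HAS_SKLEARN = importlib.util.find_spec("sklearn") is not None


def assert_clusters_sane(clusters):
    # With scikit-learn installed, DBSCAN should find at least one
    # cluster for the seeded potholes; without it, the fallback may
    # legitimately return an empty list, so only the type is checked.
    if HAS_SKLEARN:
        assert len(clusters) > 0
    else:
        assert isinstance(clusters, list)
```

Alternatively, a single `pytest.importorskip("sklearn")` at the top of the test skips it outright in environments without scikit-learn.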
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_civic_intelligence_system.py` around lines 143 - 151, The
test's assertion on clusters (snapshot_data["trends"]["clusters"]) is flaky
because clustering behavior depends on scikit-learn availability; update the
test to guard against missing sklearn by calling pytest.importorskip("sklearn")
at the start of the test (or immediately before using
spatial_utils.cluster_issues_dbscan / evaluating clusters) so the test is
skipped when sklearn is not present, or alternatively relax the assertion to
accept an empty clusters result when sklearn is missing; reference symbols:
snapshot_data, clusters, spatial_utils.cluster_issues_dbscan, and the test
function name to locate the change.

Copilot AI left a comment

Pull request overview

This PR integrates the pre-existing Civic Intelligence Engine with the issue creation workflow, replacing the hardcoded 50-meter duplicate search radius with a dynamic, learned radius from AdaptiveWeights. A new comprehensive system test validates the daily refinement cycle, including snapshot generation and weight updates.

Changes:

  • Modified backend/routers/issues.py to use adaptive_weights.get_duplicate_search_radius() instead of hardcoded 50m radius for duplicate detection
  • Added comprehensive system test backend/tests/test_civic_intelligence_system.py to verify the daily civic intelligence cycle end-to-end

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

File — Description

  • backend/routers/issues.py — Integrates adaptive_weights to retrieve a dynamic duplicate search radius instead of using the hardcoded 50 m value
  • backend/tests/test_civic_intelligence_system.py — New system-level test that validates the daily refinement cycle with a real database, snapshot generation, and weight updates


Comment on lines +75 to +88
# Create Issues (Cluster for Potholes to trigger radius increase if logic met)
# 6 Potholes very close to each other
for i in range(6):
    issue = Issue(
        description=f"Pothole {i}",
        category="Pothole",
        latitude=10.0001 + (i * 0.0001), # Very close
        longitude=20.0001,
        created_at=now,
        status="open",
        integrity_hash="hash",
        reference_id=f"ref-{i}"
    )
    test_db.add(issue)
Copilot AI Feb 24, 2026

The test creates 6 Pothole issues to test clustering, but doesn't verify that the duplicate search radius is actually updated. According to the radius adjustment logic (civic_intelligence.py:131-139), the radius only increases if cluster_count > 5. With 6 issues clustered together, we'd expect 0-1 clusters (after filtering for size >= 3), which wouldn't trigger the radius increase. Consider adding an assertion to verify the radius value after the cycle runs, or adjust the test data to reliably trigger the radius update logic.
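The threshold behavior this comment describes can be sketched as below. Only the `cluster_count > 5` condition comes from the comment's reading of civic_intelligence.py; the step size and cap are illustrative assumptions.

```python
def adjust_duplicate_radius(current_radius_m: float,
                            cluster_count: int,
                            step_m: float = 5.0,
                            max_radius_m: float = 500.0) -> float:
    # Per the review comment, the engine widens the search radius only
    # when more than five clusters were detected in the daily cycle.
    if cluster_count > 5:
        return min(current_radius_m + step_m, max_radius_m)
    return current_radius_m
```

Under this reading, six seeded issues that collapse into one or two clusters never satisfy cluster_count > 5, which is why the comment asks the test to either assert on the radius or seed enough data to cross the threshold.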

Comment on lines +131 to +132
with patch("backend.civic_intelligence.SessionLocal", return_value=test_db):
    civic_intelligence_engine.run_daily_cycle()
Copilot AI Feb 24, 2026

The patch for SessionLocal returns the test_db instance directly, but run_daily_cycle calls SessionLocal() which expects a factory function. This should return a lambda or callable that returns test_db, not test_db directly. The correct pattern would be: return_value=lambda: test_db or using a MagicMock that returns test_db when called.

Comment on lines +75 to +76
# Create Issues (Cluster for Potholes to trigger radius increase if logic met)
# 6 Potholes very close to each other
Copilot AI Feb 24, 2026

The radius adjustment logic in civic_intelligence.py only triggers when cluster_count > 5, but this test only creates 6 issues which may result in 0, 1, or 2 clusters depending on the DBSCAN clustering (requires at least 3 issues per cluster, and 6 very close issues might form 1 or 2 clusters). The test doesn't verify whether the radius actually changed, and the comment "trigger radius increase if logic met" suggests an expectation that may not be met. Consider either creating more issues to reliably trigger cluster_count > 5, or adjusting the test expectations.

Comment on lines +54 to +61
with patch("backend.adaptive_weights.DATA_FILE", str(weights_file)):
    # Reset singleton to ensure it loads from new file
    AdaptiveWeights._instance = None
    # Initialize
    AdaptiveWeights()
    yield str(weights_file)
    # Cleanup
    AdaptiveWeights._instance = None
Copy link

Copilot AI Feb 24, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The patch context for DATA_FILE exits after yield (line 59), but the AdaptiveWeights singleton instance persists with the temporary file path loaded. This could cause issues if other tests run after this one, as they would still have the singleton pointing to a non-existent temporary file. Consider moving the final AdaptiveWeights._instance = None cleanup outside the patch context, or ensure the singleton is reset before the patch exits.

Copilot uses AI. Check for mistakes.
@cubic-dev-ai cubic-dev-ai bot left a comment

4 issues found across 2 files

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="backend/tests/test_civic_intelligence_system.py">

<violation number="1" location="backend/tests/test_civic_intelligence_system.py:15">
P3: Using a fixed SQLite file path for the system test makes the test non-isolated; parallel runs or repeated runs can collide on `./test_system.db` or fail cleanup due to open connections. Prefer a per-test temporary DB path (tmp_path) and create/dispose the engine inside the fixture to guarantee isolation.</violation>

<violation number="2" location="backend/tests/test_civic_intelligence_system.py:105">
P2: Grievances are created with hardcoded `issue_id=100+i` values that don't correspond to any actual `Issue` rows in the test database. If foreign key constraints are enabled (SQLite `PRAGMA foreign_keys` or PostgreSQL), these inserts will fail. Instead, collect the `Issue` objects created earlier, call `test_db.flush()` to populate their auto-generated IDs, and reference `issues[i].id` here.</violation>

<violation number="3" location="backend/tests/test_civic_intelligence_system.py:131">
P1: The mock `patch("backend.civic_intelligence.SessionLocal", return_value=test_db)` passes the test's own session as the return value. Since `run_daily_cycle()` closes the session in its `finally` block, `test_db` will be closed after this call, making all subsequent assertions and fixture teardown fail or behave unexpectedly. Use a `side_effect` that returns a separate session or a wrapper that prevents closing the shared test session.</violation>

<violation number="4" location="backend/tests/test_civic_intelligence_system.py:151">
P2: This assertion will fail if `scikit-learn` is not installed in the test environment, since clustering behavior depends on it. Guard this test with `pytest.importorskip("sklearn")` at the top of the test function, or add `scikit-learn` as a required test dependency to prevent flaky failures.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


# 2. Run Engine
# We need to patch SessionLocal in civic_intelligence to use our test_db
with patch("backend.civic_intelligence.SessionLocal", return_value=test_db):
@cubic-dev-ai cubic-dev-ai bot Feb 24, 2026

P1: The mock patch("backend.civic_intelligence.SessionLocal", return_value=test_db) passes the test's own session as the return value. Since run_daily_cycle() closes the session in its finally block, test_db will be closed after this call, making all subsequent assertions and fixture teardown fail or behave unexpectedly. Use a side_effect that returns a separate session or a wrapper that prevents closing the shared test session.

<file context>
@@ -0,0 +1,163 @@
+
+    # 2. Run Engine
+    # We need to patch SessionLocal in civic_intelligence to use our test_db
+    with patch("backend.civic_intelligence.SessionLocal", return_value=test_db):
+        civic_intelligence_engine.run_daily_cycle()
+
</file context>
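The reviewer's suggestion can be sketched as a small proxy whose `close()` is a no-op, so `run_daily_cycle()`'s `finally` block cannot close the shared test session. This is a sketch, not code from the PR; `NonClosingSession` is a hypothetical helper name, and `test_db` / `SessionLocal` follow the names in the test file.

```python
class NonClosingSession:
    """Wraps a real session but ignores close(), keeping it usable afterwards."""

    def __init__(self, session):
        self._session = session

    def close(self):
        # Deliberately a no-op: later assertions and fixture teardown
        # still need the shared test session to be open.
        pass

    def __getattr__(self, name):
        # Forward every other attribute/method to the real session.
        return getattr(self._session, name)

# Hypothetical usage in the test, via side_effect rather than return_value:
# with patch("backend.civic_intelligence.SessionLocal",
#            side_effect=lambda: NonClosingSession(test_db)):
#     civic_intelligence_engine.run_daily_cycle()
```

Using `side_effect=lambda: ...` means each call to the patched `SessionLocal()` yields a fresh proxy, matching the factory semantics of a real `sessionmaker`.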

# But current implementation in trend_analyzer calls spatial_utils.cluster_issues_dbscan
# which falls back to list of lists if sklearn missing.
# So we should see clusters.
assert len(clusters) > 0
P2: This assertion will fail if scikit-learn is not installed in the test environment, since clustering behavior depends on it. Guard this test with pytest.importorskip("sklearn") at the top of the test function, or add scikit-learn as a required test dependency to prevent flaky failures.

<file context>
@@ -0,0 +1,163 @@
+    # But current implementation in trend_analyzer calls spatial_utils.cluster_issues_dbscan
+    # which falls back to list of lists if sklearn missing.
+    # So we should see clusters.
+    assert len(clusters) > 0
+
+    # 4. Verify Weight Updates
</file context>
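Both options the reviewer mentions can be sketched as follows. The `has_module` helper is a hypothetical stdlib-only check; the commented lines show where `pytest.importorskip` or `skipif` would go in the test (assumed pytest idioms, not code from the PR).

```python
import importlib.util

def has_module(name: str) -> bool:
    """True if the optional dependency can be imported, without importing it."""
    return importlib.util.find_spec(name) is not None

# Hypothetical use in the system test:
# import pytest
#
# @pytest.mark.skipif(not has_module("sklearn"),
#                     reason="scikit-learn not installed")
# def test_daily_cycle_clustering(test_db):
#     ...
#
# or, equivalently, inside the test body:
#     pytest.importorskip("sklearn")
```

Either guard turns a missing `scikit-learn` into a skip rather than a failure; alternatively, pinning `scikit-learn` in the test requirements removes the need for a guard entirely.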


for i in range(3):
g = Grievance(
issue_id=100+i, # Dummy ID
P2: Grievances are created with hardcoded issue_id=100+i values that don't correspond to any actual Issue rows in the test database. If foreign key constraints are enabled (SQLite PRAGMA foreign_keys or PostgreSQL), these inserts will fail. Instead, collect the Issue objects created earlier, call test_db.flush() to populate their auto-generated IDs, and reference issues[i].id here.

<file context>
@@ -0,0 +1,163 @@
+
+    for i in range(3):
+        g = Grievance(
+            issue_id=100+i, # Dummy ID
+            category="Fire",
+            severity=SeverityLevel.CRITICAL,
</file context>
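The failure mode and the fix can be illustrated with stdlib `sqlite3` (the PR uses SQLAlchemy; table and column names here are simplified stand-ins): once foreign keys are enforced, a grievance must reference a real issue row, so the test should insert issues first and reuse their autogenerated ids instead of hardcoding `100 + i`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # what the reviewer warns about
conn.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, category TEXT)")
conn.execute("""CREATE TABLE grievances (
    id INTEGER PRIMARY KEY,
    issue_id INTEGER NOT NULL REFERENCES issues(id),
    category TEXT)""")

# Insert issues first and collect their generated ids -- the sqlite3
# analogue of test_db.flush() followed by issues[i].id in SQLAlchemy.
issue_ids = []
for _ in range(3):
    cur = conn.execute("INSERT INTO issues (category) VALUES (?)", ("Fire",))
    issue_ids.append(cur.lastrowid)

# Grievances referencing real issue ids insert cleanly.
for iid in issue_ids:
    conn.execute("INSERT INTO grievances (issue_id, category) VALUES (?, ?)",
                 (iid, "Fire"))

# A hardcoded id like 100 + i has no matching issues row and is rejected.
try:
    conn.execute("INSERT INTO grievances (issue_id, category) VALUES (?, ?)",
                 (100, "Fire"))
    fk_violation = False
except sqlite3.IntegrityError:
    fk_violation = True
```

Note that SQLite only enforces foreign keys when the pragma is on, which is why the test may currently pass while still being fragile.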

from backend.adaptive_weights import AdaptiveWeights

# Test DB Setup
TEST_DB_URL = "sqlite:///./test_system.db"
P3: Using a fixed SQLite file path for the system test makes the test non-isolated; parallel runs or repeated runs can collide on ./test_system.db or fail cleanup due to open connections. Prefer a per-test temporary DB path (tmp_path) and create/dispose the engine inside the fixture to guarantee isolation.

<file context>
@@ -0,0 +1,163 @@
+from backend.adaptive_weights import AdaptiveWeights
+
+# Test DB Setup
+TEST_DB_URL = "sqlite:///./test_system.db"
+engine = create_engine(TEST_DB_URL, connect_args={"check_same_thread": False})
+TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
</file context>
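The isolation issue can be sketched with a per-call temporary database file (stdlib only; the PR's fixture would use pytest's `tmp_path` with SQLAlchemy instead, as shown in the commented hypothetical fixture). Each run gets its own file, so parallel or repeated runs cannot collide on `./test_system.db`.

```python
import os
import sqlite3
import tempfile

def make_test_db():
    """Create a fresh SQLite DB in a unique temp file and return (conn, path)."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)  # sqlite3 opens the file itself
    conn = sqlite3.connect(path)
    return conn, path

# Hypothetical pytest/SQLAlchemy fixture equivalent:
# @pytest.fixture
# def test_db(tmp_path):
#     engine = create_engine(f"sqlite:///{tmp_path / 'test_system.db'}",
#                            connect_args={"check_same_thread": False})
#     Base.metadata.create_all(engine)
#     session = sessionmaker(bind=engine)()
#     yield session
#     session.close()
#     engine.dispose()  # release file handles before tmp_path cleanup
```

Creating and disposing the engine inside the fixture, rather than at module import time, is what guarantees the cleanup the reviewer asks for.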
