Integrate Daily Civic Intelligence Engine with Issue Creation#470
RohanExploit wants to merge 2 commits into main from
Conversation
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
✅ Deploy Preview for fixmybharat canceled.
🙏 Thank you for your contribution, @RohanExploit!

PR Details:
Quality Checklist:
Review Process:
Note: The maintainers will monitor code quality and ensure the overall project flow isn't broken.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
📝 Walkthrough

Replaces a hardcoded 50.0 m spatial deduplication radius with a learned adaptive radius and adds an end-to-end test for the Civic Intelligence engine that verifies snapshots, clustering, and adaptive weight updates.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/routers/issues.py (1)
97-124: ⚠️ Potential issue | 🟡 Minor: No upper-bound guard on the learned radius.
get_duplicate_search_radius() returns whatever is stored in the weights file (default 50.0). If the adaptive logic or a corrupt file produces an extremely large value, the bounding-box query will scan a disproportionately large area, degrading performance and potentially returning thousands of candidates. The user-facing /api/issues/nearby endpoint already caps radius at 500 m (line 293); consider applying a similar clamp here.

Proposed fix
```diff
 search_radius = adaptive_weights.get_duplicate_search_radius()
+# Clamp to a sane range to guard against corrupt/extreme weight values
+search_radius = max(10.0, min(search_radius, 500.0))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routers/issues.py` around lines 97 - 124, Clamp the learned search radius returned by adaptive_weights.get_duplicate_search_radius() to a safe upper bound (e.g., 500 meters) before using it to compute the bounding box and before calling find_nearby_issues; update the variable search_radius = min(search_radius, 500) (or similar) immediately after calling get_duplicate_search_radius() so the run_in_threadpool db.query bounding-box filter and the subsequent find_nearby_issues call both use the clamped value, and optionally add a debug/processLogger.warn noting when the radius was truncated.
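The clamp suggested above is small enough to sketch on its own. The 500 m cap mirrors the limit the review cites for the /api/issues/nearby endpoint; the 10 m floor and the helper's name are illustrative assumptions, not code from the repository:

```python
# Hedged sketch: clamp a learned radius to a sane range before it is used
# in a bounding-box query. The floor/cap values are assumptions; only the
# 500 m upper bound is taken from the review comment.
def clamp_radius(learned_radius: float, floor: float = 10.0, cap: float = 500.0) -> float:
    """Guard against corrupt or extreme adaptive-weight values."""
    return max(floor, min(learned_radius, cap))

print(clamp_radius(50.0))  # typical learned value passes through unchanged
print(clamp_radius(1e9))   # corrupt/extreme value is capped at 500.0
print(clamp_radius(-3.0))  # nonsensical negative value is floored at 10.0
```

A one-line `max(floor, min(value, cap))` keeps the hot path allocation-free; logging when the clamp actually truncates, as the prompt suggests, would make corrupt weight files visible.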
🧹 Nitpick comments (2)
backend/routers/issues.py (1)
97-101: Synchronous file I/O on the async event loop.

adaptive_weights.get_duplicate_search_radius() calls _check_reload(), which may stat or re-read the weights JSON file. This is blocking I/O inside an async def handler, which can stall the event loop under load. Wrap it in run_in_threadpool for consistency with the rest of this function, or cache the value so the file isn't touched per-request.

Proposed fix
```diff
- search_radius = adaptive_weights.get_duplicate_search_radius()
+ search_radius = await run_in_threadpool(adaptive_weights.get_duplicate_search_radius)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routers/issues.py` around lines 97 - 101, the call to adaptive_weights.get_duplicate_search_radius() performs blocking file I/O via _check_reload() inside an async handler; change this to run in a threadpool (e.g., wrap the call with run_in_threadpool or anyio.to_thread) or read/cache the radius once and reuse it per-request so the event loop isn't blocked. Update the call site in the async handler in backend/routers/issues.py where search_radius is assigned, or add a cached accessor on the AdaptiveWeights instance that returns a preloaded value instead of invoking get_duplicate_search_radius() synchronously.

backend/tests/test_civic_intelligence_system.py (1)
14-29: Prefer an in-memory SQLite DB to avoid file-system side effects.

Using a file-based test_system.db (line 15) is fragile: it depends on CWD, collides under parallel test execution (pytest-xdist), and requires manual cleanup that can fail if the file is still locked. An in-memory DB with StaticPool avoids all of this.

Proposed fix
```diff
-TEST_DB_URL = "sqlite:///./test_system.db"
-engine = create_engine(TEST_DB_URL, connect_args={"check_same_thread": False})
+from sqlalchemy.pool import StaticPool
+
+engine = create_engine(
+    "sqlite://",
+    connect_args={"check_same_thread": False},
+    poolclass=StaticPool,
+)
 TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
```

And in the fixture, remove the file cleanup:
```diff
     finally:
         db.close()
         Base.metadata.drop_all(bind=engine)
-        if os.path.exists("./test_system.db"):
-            os.remove("./test_system.db")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_civic_intelligence_system.py` around lines 14 - 29, Replace the file-based TEST_DB_URL/engine with an in-memory SQLite engine using poolclass=StaticPool to avoid filesystem side effects: set TEST_DB_URL to "sqlite:///:memory:", recreate engine via create_engine(TEST_DB_URL, connect_args={"check_same_thread": False}, poolclass=StaticPool) (import StaticPool from sqlalchemy.pool) and keep TestingSessionLocal = sessionmaker(...). In the test_db fixture (function name test_db, and calls to Base.metadata.create_all / drop_all and db.close), remove the file cleanup block that checks os.path.exists("./test_system.db") and os.remove(...) since the in-memory DB does not create a file.
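The "no file on disk" property behind this suggestion can be demonstrated with stdlib sqlite3 alone; with SQLAlchemy the equivalent is `create_engine("sqlite://", poolclass=StaticPool)` so every checkout reuses the single in-memory connection. The table and row below are made-up demo data:

```python
import sqlite3

# Hedged sketch: an in-memory SQLite database leaves nothing on disk, so
# there is no cleanup step to fail and parallel test runs cannot collide
# on a shared file path.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, description TEXT)")
conn.execute("INSERT INTO issues (description) VALUES ('Pothole 0')")
rows = conn.execute("SELECT description FROM issues").fetchall()
print(rows)
conn.close()  # the whole database vanishes with the connection
```

Note the StaticPool requirement in the SQLAlchemy case: without it, each pooled connection would get its own empty in-memory database.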
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/tests/test_civic_intelligence_system.py`:
- Around line 143-151: The test's assertion on clusters
(snapshot_data["trends"]["clusters"]) is flaky because clustering behavior
depends on scikit-learn availability; update the test to guard against missing
sklearn by calling pytest.importorskip("sklearn") at the start of the test (or
immediately before using spatial_utils.cluster_issues_dbscan / evaluating
clusters) so the test is skipped when sklearn is not present, or alternatively
relax the assertion to accept an empty clusters result when sklearn is missing;
reference symbols: snapshot_data, clusters, spatial_utils.cluster_issues_dbscan,
and the test function name to locate the change.
- Around line 103-127: The Grievance records are being created with dummy
issue_id values (e.g., 100+i) which can violate foreign key constraints; update
the loop that creates Grievance instances (the Grievance(...) block) to use the
real Issue IDs created earlier in the test (use the Issue objects or their .id
attributes instead of hard-coded 100+i), ensure each Grievance.grievance_id
references an existing Issue.id before adding to test_db and flushing, and then
proceed to create the EscalationAudit entries
(EscalationAudit(grievance_id=g.id, ...)) and commit as before.
- Around line 129-132: The test currently patches SessionLocal to always return
the same Session instance (test_db) which gets closed by
civic_intelligence.run_daily_cycle()’s finally block; change the patch so each
SessionLocal() call returns a fresh/managed session instead of the single
test_db. Replace the current patch of SessionLocal with either a side_effect
that creates a new session for each call (e.g., side_effect=lambda:
TestSessionFactory() or a helper that calls sessionmaker()) or patch
SessionLocal to a context-manager wrapper (session_scope) that yields test_db
without letting run_daily_cycle close the underlying fixture; update references
to SessionLocal, test_db, and run_daily_cycle accordingly.
---
Outside diff comments:
In `@backend/routers/issues.py`:
- Around line 97-124: Clamp the learned search radius returned by
adaptive_weights.get_duplicate_search_radius() to a safe upper bound (e.g., 500
meters) before using it to compute the bounding box and before calling
find_nearby_issues; update the variable search_radius = min(search_radius, 500)
(or similar) immediately after calling get_duplicate_search_radius() so the
run_in_threadpool db.query bounding-box filter and the subsequent
find_nearby_issues call both use the clamped value, and optionally add a
debug/processLogger.warn noting when the radius was truncated.
---
Nitpick comments:
In `@backend/routers/issues.py`:
- Around line 97-101: The call to adaptive_weights.get_duplicate_search_radius()
performs blocking file I/O via _check_reload() inside an async handler; change
this to run in a threadpool (e.g., wrap the call with run_in_threadpool or
anyio.to_thread) or read/cached the radius once and reuse it per-request so the
event loop isn't blocked—update the call site in the async handler in
backend/routers/issues.py where search_radius is assigned, or add a cached
accessor on the AdaptiveWeights instance that returns a preloaded value instead
of invoking get_duplicate_search_radius() synchronously.
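The threadpool suggestion above can be sketched with stdlib `asyncio.to_thread`, which plays the same role as starlette's `run_in_threadpool`. `read_weights_file` and its return value are stand-ins for the real `_check_reload()` path, not code from the repository:

```python
import asyncio
import time

def read_weights_file() -> float:
    # Stand-in for the blocking file stat/read done by _check_reload().
    time.sleep(0.01)
    return 50.0

async def handler() -> float:
    # Offloading the blocking call keeps the event loop responsive;
    # starlette's run_in_threadpool is the equivalent in a FastAPI handler.
    return await asyncio.to_thread(read_weights_file)

radius = asyncio.run(handler())
print(radius)
```

The alternative named in the prompt, caching the value per-process, trades a little staleness for zero per-request I/O.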
In `@backend/tests/test_civic_intelligence_system.py`:
- Around line 14-29: Replace the file-based TEST_DB_URL/engine with an in-memory
SQLite engine using poolclass=StaticPool to avoid filesystem side effects: set
TEST_DB_URL to "sqlite:///:memory:", recreate engine via
create_engine(TEST_DB_URL, connect_args={"check_same_thread": False},
poolclass=StaticPool) (import StaticPool from sqlalchemy.pool) and keep
TestingSessionLocal = sessionmaker(...). In the test_db fixture (function name
test_db, and calls to Base.metadata.create_all / drop_all and db.close), remove
the file cleanup block that checks os.path.exists("./test_system.db") and
os.remove(...) since the in-memory DB does not create a file.
```python
for i in range(3):
    g = Grievance(
        issue_id=100+i,  # Dummy ID
        category="Fire",
        severity=SeverityLevel.CRITICAL,
        status=GrievanceStatus.IN_PROGRESS,
        current_jurisdiction_id=j.id,
        assigned_authority="Test Auth",
        sla_deadline=now + timedelta(days=1),
        unique_id=f"G-{i}"
    )
    test_db.add(g)
    test_db.flush()

    audit = EscalationAudit(
        grievance_id=g.id,
        previous_authority="System",
        new_authority="Admin",
        reason=EscalationReason.SEVERITY_UPGRADE,
        timestamp=now,
        notes="Manual Upgrade by Admin"
    )
    test_db.add(audit)

test_db.commit()
```
Grievances reference non-existent issue_id values (100, 101, 102).
Grievance(issue_id=100+i, ...) uses dummy IDs that don't correspond to any Issue row. If PRAGMA foreign_keys is enabled (or if the test is ever ported to PostgreSQL), these inserts will fail with an FK violation. Safer to reference actual Issue IDs created earlier in the test.
Proposed fix (sketch)

```diff
+    issues = []
     for i in range(6):
         issue = Issue(
             description=f"Pothole {i}",
             ...
         )
         test_db.add(issue)
+        issues.append(issue)
+    test_db.flush()  # populate auto-generated IDs
     ...
     for i in range(3):
         g = Grievance(
-            issue_id=100+i,  # Dummy ID
+            issue_id=issues[i].id,
             ...
         )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_civic_intelligence_system.py` around lines 103 - 127, The
Grievance records are being created with dummy issue_id values (e.g., 100+i)
which can violate foreign key constraints; update the loop that creates
Grievance instances (the Grievance(...) block) to use the real Issue IDs created
earlier in the test (use the Issue objects or their .id attributes instead of
hard-coded 100+i), ensure each Grievance.grievance_id references an existing
Issue.id before adding to test_db and flushing, and then proceed to create the
EscalationAudit entries (EscalationAudit(grievance_id=g.id, ...)) and commit as
before.
```python
# 2. Run Engine
# We need to patch SessionLocal in civic_intelligence to use our test_db
with patch("backend.civic_intelligence.SessionLocal", return_value=test_db):
    civic_intelligence_engine.run_daily_cycle()
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Find the civic_intelligence module
find . -name "*civic_intelligence*" -type f | head -20
```

Repository: RohanExploit/VishwaGuru

🏁 Script executed:

```shell
# Search for run_daily_cycle function definition
rg -n "def run_daily_cycle" --type=py
```

🏁 Script executed:

```shell
# Search for session.close() calls in the codebase
rg -n "\.close\(\)" --type=py -A 2 -B 2 | head -100
```

🏁 Script executed:

```shell
# Look at SessionLocal import and usage
rg -n "SessionLocal" --type=py | head -30
```

🏁 Script executed:

```shell
cat -n backend/civic_intelligence.py | head -100
```

🏁 Script executed:

```shell
# Get the full run_daily_cycle method
sed -n '39,150p' backend/civic_intelligence.py
```

🏁 Script executed:

```shell
# Get the rest of run_daily_cycle to see if it closes the session
sed -n '100,200p' backend/civic_intelligence.py
```
Mock breaks when run_daily_cycle closes the session.
The patch patch("backend.civic_intelligence.SessionLocal", return_value=test_db) returns the same session object for every SessionLocal() call. Since run_daily_cycle() explicitly closes the session at the end of its execution (in the finally block), the fixture's test_db becomes closed and unusable for subsequent assertions or teardown. Use a context manager or side_effect to manage session lifecycle properly, or wrap the session in a session_scope utility.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_civic_intelligence_system.py` around lines 129 - 132, The
test currently patches SessionLocal to always return the same Session instance
(test_db) which gets closed by civic_intelligence.run_daily_cycle()’s finally
block; change the patch so each SessionLocal() call returns a fresh/managed
session instead of the single test_db. Replace the current patch of SessionLocal
with either a side_effect that creates a new session for each call (e.g.,
side_effect=lambda: TestSessionFactory() or a helper that calls sessionmaker())
or patch SessionLocal to a context-manager wrapper (session_scope) that yields
test_db without letting run_daily_cycle close the underlying fixture; update
references to SessionLocal, test_db, and run_daily_cycle accordingly.
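The side_effect pattern the prompt recommends can be shown in miniature. Everything here is a stand-in: FakeSession for a SQLAlchemy Session, the SimpleNamespace for the backend.civic_intelligence module, and the trivial run_daily_cycle for the real engine method:

```python
import types
from unittest.mock import patch

class FakeSession:
    """Minimal stand-in for a SQLAlchemy Session."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

# Stand-in for the backend.civic_intelligence module under test.
civic_intelligence = types.SimpleNamespace(SessionLocal=FakeSession)

def run_daily_cycle():
    # Mirrors the pattern the review describes: the engine opens a session
    # via SessionLocal() and always closes it in a finally block.
    db = civic_intelligence.SessionLocal()
    try:
        pass  # ... daily work ...
    finally:
        db.close()

fixture_session = FakeSession()

# side_effect hands out a fresh session per call, so run_daily_cycle()
# closing its own session cannot invalidate the fixture's session.
with patch.object(civic_intelligence, "SessionLocal", side_effect=FakeSession):
    run_daily_cycle()

print(fixture_session.closed)  # the fixture session was never touched
```

With `return_value=test_db` instead, the engine's finally block would have closed the very session the test's later assertions depend on.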
```python
# Verify cluster detected
clusters = snapshot_data["trends"]["clusters"]
# DBSCAN clustering depends on scikit-learn availability.
# If scikit-learn is missing, it returns all issues as clusters (if fallback logic is used)
# or empty if strictly depending on sklearn.
# But current implementation in trend_analyzer calls spatial_utils.cluster_issues_dbscan
# which falls back to list of lists if sklearn missing.
# So we should see clusters.
assert len(clusters) > 0
```
Assertion depends on scikit-learn availability — potential flaky test.
The comments on lines 145-150 acknowledge that clustering behavior varies based on whether sklearn is installed. If it's not a guaranteed dependency in the test environment, assert len(clusters) > 0 can fail unpredictably. Consider either:
- Adding sklearn as a test dependency, or
- Guarding with pytest.importorskip("sklearn") at the top of the test.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_civic_intelligence_system.py` around lines 143 - 151, The
test's assertion on clusters (snapshot_data["trends"]["clusters"]) is flaky
because clustering behavior depends on scikit-learn availability; update the
test to guard against missing sklearn by calling pytest.importorskip("sklearn")
at the start of the test (or immediately before using
spatial_utils.cluster_issues_dbscan / evaluating clusters) so the test is
skipped when sklearn is not present, or alternatively relax the assertion to
accept an empty clusters result when sklearn is missing; reference symbols:
snapshot_data, clusters, spatial_utils.cluster_issues_dbscan, and the test
function name to locate the change.
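The availability check behind `pytest.importorskip("sklearn")` can also be made explicit with stdlib importlib; this sketch shows the relaxed-assertion alternative the prompt mentions. The cluster-count expectations are illustrative, not taken from trend_analyzer:

```python
import importlib.util

# pytest.importorskip("sklearn") skips the whole test when scikit-learn is
# absent; the same availability probe, written out by hand:
has_sklearn = importlib.util.find_spec("sklearn") is not None

def expected_min_clusters(sklearn_available: bool) -> int:
    # Illustrative assumption: with sklearn present we demand at least one
    # cluster from the 6 nearby issues; without it, the fallback path may
    # legitimately return none, so the assertion is relaxed to zero.
    return 1 if sklearn_available else 0

print(expected_min_clusters(True), expected_min_clusters(False))
```

The test could then assert `len(clusters) >= expected_min_clusters(has_sklearn)` instead of unconditionally requiring clusters.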
Pull request overview
This PR integrates the pre-existing Civic Intelligence Engine with the issue creation workflow, replacing the hardcoded 50-meter duplicate search radius with a dynamic, learned radius from AdaptiveWeights. A new comprehensive system test validates the daily refinement cycle, including snapshot generation and weight updates.
Changes:
- Modified backend/routers/issues.py to use adaptive_weights.get_duplicate_search_radius() instead of a hardcoded 50 m radius for duplicate detection
- Added comprehensive system test backend/tests/test_civic_intelligence_system.py to verify the daily civic intelligence cycle end-to-end
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| backend/routers/issues.py | Integrates adaptive_weights to retrieve dynamic duplicate search radius instead of using hardcoded 50m value |
| backend/tests/test_civic_intelligence_system.py | New system-level test that validates the daily refinement cycle with real database, snapshot generation, and weight updates |
```python
# Create Issues (Cluster for Potholes to trigger radius increase if logic met)
# 6 Potholes very close to each other
for i in range(6):
    issue = Issue(
        description=f"Pothole {i}",
        category="Pothole",
        latitude=10.0001 + (i * 0.0001),  # Very close
        longitude=20.0001,
        created_at=now,
        status="open",
        integrity_hash="hash",
        reference_id=f"ref-{i}"
    )
    test_db.add(issue)
```
The test creates 6 Pothole issues to test clustering, but doesn't verify that the duplicate search radius is actually updated. According to the radius adjustment logic (civic_intelligence.py:131-139), the radius only increases if cluster_count > 5. With 6 issues clustered together, we'd expect 0-1 clusters (after filtering for size >= 3), which wouldn't trigger the radius increase. Consider adding an assertion to verify the radius value after the cycle runs, or adjust the test data to reliably trigger the radius update logic.
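A toy sketch of the threshold this comment describes: following the reviewer's reading of civic_intelligence.py, the radius grows only when cluster_count exceeds 5. The step size and function name here are made-up placeholders, not the engine's actual update rule:

```python
# Illustrative sketch of a count-gated radius update. Only the
# "cluster_count > 5" condition comes from the review; the step size is
# a placeholder.
def maybe_increase_radius(radius: float, cluster_count: int, step: float = 5.0) -> float:
    return radius + step if cluster_count > 5 else radius

# 6 tightly packed issues form roughly one cluster, so no change:
print(maybe_increase_radius(50.0, cluster_count=1))
# Only with more than five distinct clusters does the radius grow:
print(maybe_increase_radius(50.0, cluster_count=6))
```

This is why the test's six co-located potholes cannot trigger the update: they raise the cluster size, not the cluster count.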
```python
with patch("backend.civic_intelligence.SessionLocal", return_value=test_db):
    civic_intelligence_engine.run_daily_cycle()
```
The patch for SessionLocal returns the test_db instance directly, but run_daily_cycle calls SessionLocal() which expects a factory function. This should return a lambda or callable that returns test_db, not test_db directly. The correct pattern would be: return_value=lambda: test_db or using a MagicMock that returns test_db when called.
```python
# Create Issues (Cluster for Potholes to trigger radius increase if logic met)
# 6 Potholes very close to each other
```
The radius adjustment logic in civic_intelligence.py only triggers when cluster_count > 5, but this test only creates 6 issues which may result in 0, 1, or 2 clusters depending on the DBSCAN clustering (requires at least 3 issues per cluster, and 6 very close issues might form 1 or 2 clusters). The test doesn't verify whether the radius actually changed, and the comment "trigger radius increase if logic met" suggests an expectation that may not be met. Consider either creating more issues to reliably trigger cluster_count > 5, or adjusting the test expectations.
```python
with patch("backend.adaptive_weights.DATA_FILE", str(weights_file)):
    # Reset singleton to ensure it loads from new file
    AdaptiveWeights._instance = None
    # Initialize
    AdaptiveWeights()
    yield str(weights_file)
    # Cleanup
    AdaptiveWeights._instance = None
```
The patch context for DATA_FILE exits after yield (line 59), but the AdaptiveWeights singleton instance persists with the temporary file path loaded. This could cause issues if other tests run after this one, as they would still have the singleton pointing to a non-existent temporary file. Consider moving the final AdaptiveWeights._instance = None cleanup outside the patch context, or ensure the singleton is reset before the patch exits.
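One way to guarantee the reset is a try/finally teardown, so cleanup runs even if the patch context or the test body raises. This sketch uses a stand-in singleton class rather than the real AdaptiveWeights, and a plain generator to mimic a yield-style pytest fixture:

```python
class AdaptiveWeightsStub:
    """Illustrative singleton mirroring the AdaptiveWeights pattern."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

def weights_fixture():
    # Reset, initialise, and guarantee teardown with finally, so no later
    # test can observe a singleton bound to a vanished temporary file.
    AdaptiveWeightsStub._instance = None
    AdaptiveWeightsStub()
    try:
        yield "weights.json"  # placeholder for str(weights_file)
    finally:
        AdaptiveWeightsStub._instance = None

gen = weights_fixture()
path = next(gen)
assert AdaptiveWeightsStub._instance is not None  # live during the test body
gen.close()  # triggers the finally block, like pytest fixture teardown
print(AdaptiveWeightsStub._instance)
```

With pytest, the same effect comes from wrapping the fixture body's yield in try/finally inside the patch context.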
4 issues found across 2 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="backend/tests/test_civic_intelligence_system.py">
<violation number="1" location="backend/tests/test_civic_intelligence_system.py:15">
P3: Using a fixed SQLite file path for the system test makes the test non-isolated; parallel runs or repeated runs can collide on `./test_system.db` or fail cleanup due to open connections. Prefer a per-test temporary DB path (tmp_path) and create/dispose the engine inside the fixture to guarantee isolation.</violation>
<violation number="2" location="backend/tests/test_civic_intelligence_system.py:105">
P2: Grievances are created with hardcoded `issue_id=100+i` values that don't correspond to any actual `Issue` rows in the test database. If foreign key constraints are enabled (SQLite `PRAGMA foreign_keys` or PostgreSQL), these inserts will fail. Instead, collect the `Issue` objects created earlier, call `test_db.flush()` to populate their auto-generated IDs, and reference `issues[i].id` here.</violation>
<violation number="3" location="backend/tests/test_civic_intelligence_system.py:131">
P1: The mock `patch("backend.civic_intelligence.SessionLocal", return_value=test_db)` passes the test's own session as the return value. Since `run_daily_cycle()` closes the session in its `finally` block, `test_db` will be closed after this call, making all subsequent assertions and fixture teardown fail or behave unexpectedly. Use a `side_effect` that returns a separate session or a wrapper that prevents closing the shared test session.</violation>
<violation number="4" location="backend/tests/test_civic_intelligence_system.py:151">
P2: This assertion will fail if `scikit-learn` is not installed in the test environment, since clustering behavior depends on it. Guard this test with `pytest.importorskip("sklearn")` at the top of the test function, or add `scikit-learn` as a required test dependency to prevent flaky failures.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```python
# 2. Run Engine
# We need to patch SessionLocal in civic_intelligence to use our test_db
with patch("backend.civic_intelligence.SessionLocal", return_value=test_db):
```
P1: The mock patch("backend.civic_intelligence.SessionLocal", return_value=test_db) passes the test's own session as the return value. Since run_daily_cycle() closes the session in its finally block, test_db will be closed after this call, making all subsequent assertions and fixture teardown fail or behave unexpectedly. Use a side_effect that returns a separate session or a wrapper that prevents closing the shared test session.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/tests/test_civic_intelligence_system.py, line 131:
<comment>The mock `patch("backend.civic_intelligence.SessionLocal", return_value=test_db)` passes the test's own session as the return value. Since `run_daily_cycle()` closes the session in its `finally` block, `test_db` will be closed after this call, making all subsequent assertions and fixture teardown fail or behave unexpectedly. Use a `side_effect` that returns a separate session or a wrapper that prevents closing the shared test session.</comment>
<file context>
@@ -0,0 +1,163 @@
+
+ # 2. Run Engine
+ # We need to patch SessionLocal in civic_intelligence to use our test_db
+ with patch("backend.civic_intelligence.SessionLocal", return_value=test_db):
+ civic_intelligence_engine.run_daily_cycle()
+
</file context>
```python
# But current implementation in trend_analyzer calls spatial_utils.cluster_issues_dbscan
# which falls back to list of lists if sklearn missing.
# So we should see clusters.
assert len(clusters) > 0
```
P2: This assertion will fail if scikit-learn is not installed in the test environment, since clustering behavior depends on it. Guard this test with pytest.importorskip("sklearn") at the top of the test function, or add scikit-learn as a required test dependency to prevent flaky failures.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/tests/test_civic_intelligence_system.py, line 151:
<comment>This assertion will fail if `scikit-learn` is not installed in the test environment, since clustering behavior depends on it. Guard this test with `pytest.importorskip("sklearn")` at the top of the test function, or add `scikit-learn` as a required test dependency to prevent flaky failures.</comment>
<file context>
@@ -0,0 +1,163 @@
+ # But current implementation in trend_analyzer calls spatial_utils.cluster_issues_dbscan
+ # which falls back to list of lists if sklearn missing.
+ # So we should see clusters.
+ assert len(clusters) > 0
+
+ # 4. Verify Weight Updates
</file context>
```python
for i in range(3):
    g = Grievance(
        issue_id=100+i,  # Dummy ID
```
P2: Grievances are created with hardcoded issue_id=100+i values that don't correspond to any actual Issue rows in the test database. If foreign key constraints are enabled (SQLite PRAGMA foreign_keys or PostgreSQL), these inserts will fail. Instead, collect the Issue objects created earlier, call test_db.flush() to populate their auto-generated IDs, and reference issues[i].id here.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/tests/test_civic_intelligence_system.py, line 105:
<comment>Grievances are created with hardcoded `issue_id=100+i` values that don't correspond to any actual `Issue` rows in the test database. If foreign key constraints are enabled (SQLite `PRAGMA foreign_keys` or PostgreSQL), these inserts will fail. Instead, collect the `Issue` objects created earlier, call `test_db.flush()` to populate their auto-generated IDs, and reference `issues[i].id` here.</comment>
<file context>
@@ -0,0 +1,163 @@
+
+ for i in range(3):
+ g = Grievance(
+ issue_id=100+i, # Dummy ID
+ category="Fire",
+ severity=SeverityLevel.CRITICAL,
</file context>
```python
from backend.adaptive_weights import AdaptiveWeights

# Test DB Setup
TEST_DB_URL = "sqlite:///./test_system.db"
```
P3: Using a fixed SQLite file path for the system test makes the test non-isolated; parallel runs or repeated runs can collide on ./test_system.db or fail cleanup due to open connections. Prefer a per-test temporary DB path (tmp_path) and create/dispose the engine inside the fixture to guarantee isolation.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/tests/test_civic_intelligence_system.py, line 15:
<comment>Using a fixed SQLite file path for the system test makes the test non-isolated; parallel runs or repeated runs can collide on `./test_system.db` or fail cleanup due to open connections. Prefer a per-test temporary DB path (tmp_path) and create/dispose the engine inside the fixture to guarantee isolation.</comment>
<file context>
@@ -0,0 +1,163 @@
+from backend.adaptive_weights import AdaptiveWeights
+
+# Test DB Setup
+TEST_DB_URL = "sqlite:///./test_system.db"
+engine = create_engine(TEST_DB_URL, connect_args={"check_same_thread": False})
+TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
</file context>
Integrates the pre-existing CivicIntelligenceEngine and AdaptiveWeights with the issue creation workflow in backend/routers/issues.py.

Updates create_issue to use a dynamic duplicate search radius learned from historical patterns instead of a hardcoded 50 m radius.

Adds a comprehensive system test, backend/tests/test_civic_intelligence_system.py, to verify the daily refinement cycle, including snapshot generation and weight updates.

Verifies existing unit tests pass.

Note: The core engine logic (backend/civic_intelligence.py, backend/adaptive_weights.py) was already present in the codebase. This PR wires it up and adds system-level verification.

PR created automatically by Jules for task 9064613965019712610 started by @RohanExploit
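The wiring described above can be sketched with stand-in names; FakeAdaptiveWeights, the weights-dict key, and the sample values below are illustrative assumptions, not the real backend.adaptive_weights API:

```python
# Hedged sketch: create_issue consults the adaptive weights for its
# duplicate-search radius instead of a hardcoded constant. All names here
# are stand-ins for the real backend modules.
DEFAULT_RADIUS_M = 50.0

class FakeAdaptiveWeights:
    def __init__(self, weights: dict):
        self.weights = weights

    def get_duplicate_search_radius(self) -> float:
        # Fall back to the old hardcoded value when nothing has been learned.
        return self.weights.get("duplicate_search_radius", DEFAULT_RADIUS_M)

adaptive_weights = FakeAdaptiveWeights({"duplicate_search_radius": 72.5})
search_radius = adaptive_weights.get_duplicate_search_radius()
print(search_radius)

# With no learned value, behavior matches the previous hardcoded 50 m.
fallback = FakeAdaptiveWeights({}).get_duplicate_search_radius()
print(fallback)
```

Keeping the hardcoded value as the default means the integration degrades gracefully when the weights file is missing or empty.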
Summary by cubic
Integrates the CivicIntelligenceEngine with issue creation by replacing the fixed 50 m duplicate search radius with a learned, dynamic radius. Adds a system test for the daily refinement cycle and fixes Render deployment by setting PYTHONPATH to '.'.
Written for commit 1b17c82. Summary will update on new commits.