1 change: 1 addition & 0 deletions backend/requirements-render.txt
@@ -14,6 +14,7 @@ firebase-admin
a2wsgi
python-jose[cryptography]
passlib[bcrypt]
bcrypt

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Check the requirements-render.txt file and surrounding context
cat -n backend/requirements-render.txt | head -30

Repository: RohanExploit/VishwaGuru

Length of output: 538


🏁 Script executed:

# Check if there are other requirements files with pinning patterns
fd -e txt -e toml -e cfg requirements | head -10

Repository: RohanExploit/VishwaGuru

Length of output: 150


🌐 Web query:

latest stable bcrypt version PyPI passlib compatibility 2025 2026

💡 Result:

  • Latest stable bcrypt on PyPI (as of Feb 25, 2026): bcrypt 5.0.0 (released Sep 25, 2025). [1]
  • Latest passlib on PyPI: passlib 1.7.4 (released Oct 8, 2020). [2]

Compatibility (Passlib + bcrypt, 2025–2026 reality):

  • bcrypt 4.1.0 was yanked on PyPI explicitly due to “Incompatibility with assumptions made by passlib.” [3]
  • A common break is that Passlib can’t detect bcrypt’s version after changes in bcrypt metadata; a documented workaround is to restrict bcrypt to <= 4.0.1 to keep Passlib working. [4]

Practical guidance: if you must stay on passlib==1.7.4, expect to pin bcrypt (often 4.0.1) rather than using the latest bcrypt (5.0.0). [2][4]

Sources

  1. PyPI: bcrypt release history (shows 5.0.0 on 2025-09-25) [1]
  2. PyPI: passlib upload date for 1.7.4 (2020-10-08) [2]
  3. PyPI: bcrypt 4.1.0 yanked reason mentions passlib incompatibility [3]
  4. Launchpad bug: Passlib cannot detect bcrypt version; workaround pins bcrypt <=4.0.1 [4]

Pin bcrypt to a compatible version with passlib[bcrypt].

Latest bcrypt versions (4.1.0+, 5.0.0) are incompatible with passlib 1.7.4—bcrypt 4.1.0 was yanked due to breaking passlib's version detection. Pin bcrypt<=4.0.1 to ensure runtime compatibility and reproducible deployments. Additionally, pin passlib[bcrypt] to avoid unexpected major version changes.

Proposed change
-passlib[bcrypt]
-bcrypt
+passlib[bcrypt]==1.7.4
+bcrypt==4.0.1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/requirements-render.txt` at line 17: pin the bcrypt dependency to a
compatible version and pin passlib[bcrypt] to avoid runtime breakage. Replace
the bare "bcrypt" entry with "bcrypt<=4.0.1" and add or update an explicit
"passlib[bcrypt]==1.7.4" entry in the requirements list so both packages are
version-locked and compatible at install time.
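The incompatibility this pin guards against can be sketched with stand-in modules (hypothetical objects; the real lookup lives inside passlib's bcrypt backend, which reads `bcrypt.__about__.__version__`, an attribute bcrypt removed in 4.1.0):

```python
import types

# Hypothetical stand-ins mirroring the attribute layout passlib 1.7.4
# probes at load time; bcrypt >= 4.1 no longer exposes __about__.
bcrypt_4_0_1 = types.SimpleNamespace(
    __about__=types.SimpleNamespace(__version__="4.0.1")
)
bcrypt_5_0_0 = types.SimpleNamespace()  # no __about__ attribute anymore

def passlib_can_detect_version(mod) -> bool:
    # Sketch of passlib's guarded version lookup.
    return hasattr(mod, "__about__") and hasattr(mod.__about__, "__version__")

print(passlib_can_detect_version(bcrypt_4_0_1))  # True
print(passlib_can_detect_version(bcrypt_5_0_0))  # False
```

When the lookup fails, passlib logs "(trapped) error reading bcrypt version" and misbehaves at hash time, which is why the pin matters at install time rather than surfacing as a clean import error.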

async_lru
SpeechRecognition
pydub
2 changes: 1 addition & 1 deletion backend/routers/issues.py
@@ -662,7 +662,7 @@ def get_recent_issues(
    cache_key = f"recent_issues_{limit}_{offset}"
    cached_data = recent_issues_cache.get(cache_key)
    if cached_data:
        return JSONResponse(content=cached_data)
        return cached_data
Comment on lines 663 to +665

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, find the cache implementation
fd -t f "cache.py" backend/ 2>/dev/null | head -20

Repository: RohanExploit/VishwaGuru

Length of output: 83


🏁 Script executed:

#!/bin/bash
# Check the lines in question from the issues.py file
cat -n backend/routers/issues.py | sed -n '670,715p'

Repository: RohanExploit/VishwaGuru

Length of output: 1825


🏁 Script executed:

#!/bin/bash
# Search for the cache.get() method definition
rg -n "def get" backend/ -A 5 --type py | grep -A 5 "cache"

Repository: RohanExploit/VishwaGuru

Length of output: 3137


🏁 Script executed:

#!/bin/bash
# Read the complete cache.py to understand get() return behavior
cat -n backend/cache.py

Repository: RohanExploit/VishwaGuru

Length of output: 6516


if cached_data: silently bypasses the cache for empty result sets.

When the DB legitimately returns zero issues, data = [] is stored in the cache at line 711, but if []: evaluates to False, so every subsequent request for that key re-hits the database. The get() method returns None for a cache miss and the actual cached value (including empty lists) on hit, so the guard should use an explicit None check.

🐛 Proposed fix
-    cached_data = recent_issues_cache.get(cache_key)
-    if cached_data:
-        return cached_data
+    cached_data = recent_issues_cache.get(cache_key)
+    if cached_data is not None:
+        return cached_data
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routers/issues.py` around lines 671 - 673: the cache check uses a
truthy test (if cached_data:), which treats empty lists as misses. Change it to
an explicit None check by testing cached_data is not None after calling
recent_issues_cache.get(cache_key), so stored empty result sets (e.g., [] saved
later in the function) are honored and needless DB hits are avoided; update the
guard around recent_issues_cache.get(cache_key) and keep the variables
cached_data and cache_key as-is.
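The pitfall reduces to a one-liner; a toy dict-backed cache (hypothetical stand-in for recent_issues_cache) makes it concrete:

```python
# Toy dict-backed cache standing in for recent_issues_cache (hypothetical).
cache = {}
cache["recent_issues_10_0"] = []  # a legitimately cached empty result set

cached = cache.get("recent_issues_10_0")

truthy_hit = bool(cached)            # False: the empty list looks like a miss
none_check_hit = cached is not None  # True: the empty list is honored as a hit

print(truthy_hit, none_check_hit)
```

Only the `is not None` guard distinguishes "nothing cached" from "an empty result set is cached", which is exactly the distinction get() encodes by returning None on a miss.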


    # Fetch issues with pagination
    # Optimized: Use column projection to fetch only needed fields
147 changes: 147 additions & 0 deletions backend/tests/test_optimizations.py
@@ -0,0 +1,147 @@
import sys
from unittest.mock import MagicMock

# Create dummy classes for types used in isinstance/issubclass checks
class MockTensor: pass

mock_torch = MagicMock()
mock_torch.Tensor = MockTensor
sys.modules["torch"] = mock_torch

sys.modules["google"] = MagicMock()
sys.modules["google.generativeai"] = MagicMock()
sys.modules["ultralytics"] = MagicMock()
sys.modules["transformers"] = MagicMock()
sys.modules["telegram"] = MagicMock()
sys.modules["telegram.ext"] = MagicMock()
sys.modules["speech_recognition"] = MagicMock()
sys.modules["a2wsgi"] = MagicMock()
sys.modules["firebase_functions"] = MagicMock()
sys.modules["googletrans"] = MagicMock()
sys.modules["langdetect"] = MagicMock()

import pytest
from unittest.mock import MagicMock, patch
from fastapi.responses import JSONResponse
# We need to ensure we import these AFTER mocking
from backend.routers.issues import get_recent_issues, create_issue
from backend.cache import recent_issues_cache
import os
import shutil

# Test get_recent_issues return type
def test_get_recent_issues_return_type():
    # Mock cache
    mock_data = [{"id": 1, "description": "test"}]
    recent_issues_cache.set(mock_data, "recent_issues_10_0")

    # Mock DB
    db = MagicMock()

    # Call function
    response = get_recent_issues(limit=10, offset=0, db=db)

    # Check that response is NOT a JSONResponse, but the data itself
    assert not isinstance(response, JSONResponse)
    assert response == mock_data
    assert isinstance(response, list)

# Test create_issue cleanup
@pytest.mark.asyncio
async def test_create_issue_cleanup():
    # Mock dependencies
    request = MagicMock()
    background_tasks = MagicMock()
    db = MagicMock()

    # Mock file upload
    image = MagicMock()
    image.filename = "test.jpg"

    # Mock process_uploaded_image to return dummy data
    with patch("backend.routers.issues.process_uploaded_image") as mock_process:
        mock_process.return_value = (MagicMock(), b"fake_bytes")

        # Mock save_processed_image to create a dummy file
        with patch("backend.routers.issues.save_processed_image") as mock_save_image:
            def side_effect(bytes_data, path):
                # Create directory if needed
                os.makedirs(os.path.dirname(path), exist_ok=True)
                with open(path, "wb") as f:
                    f.write(bytes_data)
            mock_save_image.side_effect = side_effect

            # Mock save_issue_db to raise exception
            with patch("backend.routers.issues.save_issue_db") as mock_save_db:
                mock_save_db.side_effect = Exception("DB Error")

                # Mock spatial utils
                with patch("backend.routers.issues.get_bounding_box") as mock_bbox:
                    mock_bbox.return_value = (0, 0, 0, 0)
                    with patch("backend.routers.issues.find_nearby_issues") as mock_nearby:
                        mock_nearby.return_value = []

                        # Mock rag_service
                        with patch("backend.routers.issues.rag_service") as mock_rag:
                            mock_rag.retrieve.return_value = None

                            # Call create_issue
                            try:
                                await create_issue(
                                    request=request,
                                    background_tasks=background_tasks,
                                    description="Test description length check",
                                    category="Road",
                                    language="en",
                                    user_email="test@example.com",
                                    latitude=10.0,
                                    longitude=10.0,
                                    location="Test Loc",
                                    image=image,
                                    db=db
                                )
                            except Exception as e:
                                assert "Failed to save issue to database" in str(e)
Comment on lines +89 to +104
Copilot AI Feb 25, 2026

Test doesn't verify that an exception is actually raised. If create_issue doesn't raise an exception (e.g., if error handling changes), the test will continue to line 107 and potentially pass even though the expected failure didn't occur. Consider adding an explicit failure after line 102 like assert False, "Expected create_issue to raise an exception" or using pytest.raises context manager to ensure an exception is raised.
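The pytest.raises pattern suggested above, sketched against a hypothetical stand-in for the create_issue failure path (assumes pytest is installed):

```python
import pytest

# Hypothetical stand-in for create_issue's DB-failure branch.
def fake_create_issue():
    raise RuntimeError("Failed to save issue to database: DB Error")

def test_fake_create_issue_raises():
    # pytest.raises fails the test when no exception is raised,
    # unlike a bare try/except that silently passes in that case.
    with pytest.raises(RuntimeError, match="Failed to save issue to database"):
        fake_create_issue()

test_fake_create_issue_raises()
```

The `match` argument doubles as the message assertion, so the explicit `assert ... in str(e)` inside the except block becomes unnecessary.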


                                # Check if file was cleaned up
                                args, _ = mock_save_image.call_args
                                file_path = args[1]

                                assert not os.path.exists(file_path), f"File {file_path} should have been deleted"
Comment on lines +50 to +110

⚠️ Potential issue | 🟠 Major

check_upload_limits is not mocked — the test becomes flaky after 5 runs per hour and produces a misleading failure.

create_issue calls check_upload_limits("test@example.com", UPLOAD_LIMIT_PER_USER) (Line 64 of issues.py), which writes to the real user_upload_cache. After 5 executions within an hour check_upload_limits raises HTTPException(429). At that point:

  1. process_uploaded_image is never awaited, so image_path stays None.
  2. mock_save_image is never called, so mock_save_image.call_args is None.
  3. Line 107 (args, _ = mock_save_image.call_args) raises TypeError, completely masking the intended assertion.

Additionally, the side_effect at lines 67-72 creates the data/uploads/ directory and writes a real file; there is no teardown, leaving filesystem artifacts after each test run.

🐛 Proposed fix — mock `check_upload_limits` and add cleanup
+import shutil

 @pytest.mark.asyncio
 async def test_create_issue_cleanup():
     request = MagicMock()
     background_tasks = MagicMock()
     db = MagicMock()

     image = MagicMock()
     image.filename = "test.jpg"

+    upload_dir = "data/uploads"
     with patch("backend.routers.issues.process_uploaded_image") as mock_process:
         mock_process.return_value = (MagicMock(), b"fake_bytes")

         with patch("backend.routers.issues.save_processed_image") as mock_save_image:
             def side_effect(bytes_data, path):
-                os.makedirs(os.path.dirname(path), exist_ok=True)
+                os.makedirs(upload_dir, exist_ok=True)
                 with open(path, "wb") as f:
                     f.write(bytes_data)
             mock_save_image.side_effect = side_effect

             with patch("backend.routers.issues.save_issue_db") as mock_save_db:
                 mock_save_db.side_effect = Exception("DB Error")

                 with patch("backend.routers.issues.get_bounding_box") as mock_bbox:
                     mock_bbox.return_value = (0, 0, 0, 0)
                     with patch("backend.routers.issues.find_nearby_issues") as mock_nearby:
                         mock_nearby.return_value = []

                         with patch("backend.routers.issues.rag_service") as mock_rag:
+                            with patch("backend.routers.issues.check_upload_limits"):
                                 mock_rag.retrieve.return_value = None

-                                try:
-                                    await create_issue(...)
-                                except Exception as e:
-                                    assert "Failed to save issue to database" in str(e)
-
-                                args, _ = mock_save_image.call_args
-                                file_path = args[1]
-                                assert not os.path.exists(file_path), ...
+                                try:
+                                    await create_issue(
+                                        request=request,
+                                        background_tasks=background_tasks,
+                                        description="Test description length check",
+                                        category="Road",
+                                        language="en",
+                                        user_email="test@example.com",
+                                        latitude=10.0,
+                                        longitude=10.0,
+                                        location="Test Loc",
+                                        image=image,
+                                        db=db
+                                    )
+                                except Exception as e:
+                                    assert "Failed to save issue to database" in str(e)
+                                finally:
+                                    # Cleanup test artifacts
+                                    if os.path.exists(upload_dir):
+                                        shutil.rmtree(upload_dir)
+
+                                args, _ = mock_save_image.call_args
+                                file_path = args[1]
+                                assert not os.path.exists(file_path), \
+                                    f"File {file_path} should have been deleted"
🧰 Tools
🪛 Ruff (0.15.2)

[warning] 103-103: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_optimizations.py` around lines 50 - 110: the test is
flaky because create_issue calls check_upload_limits, which mutates the real
user_upload_cache and can raise HTTPException(429) after repeated runs, and the
test writes real files without teardown. Mock check_upload_limits (or
reset/patch user_upload_cache) so it returns success and cannot raise, ensure
you only inspect mock_save_image.call_args after confirming mock_save_image was
called (e.g., check mock_save_image.call_count > 0) to avoid TypeError, and
change the save_processed_image side_effect to write into a temporary directory
(or return a temp path) and remove any created file/dirs in a finally/teardown
block so filesystem artifacts are cleaned up; reference symbols:
check_upload_limits, create_issue, save_processed_image, save_issue_db,
user_upload_cache, and mock_save_image.call_args.
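The temporary-directory half of that advice can be sketched with the stdlib alone (file names here are illustrative, mirroring the test's "test.jpg"/"fake_bytes"):

```python
import os
import tempfile

# Sketch: route test artifacts through a temporary directory that is
# removed automatically, instead of a real data/uploads/ folder.
with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "test.jpg")
    with open(path, "wb") as f:
        f.write(b"fake_bytes")
    existed_during_test = os.path.exists(path)

cleaned_up = not os.path.exists(path)
print(existed_during_test, cleaned_up)
```

In a pytest suite the built-in tmp_path fixture achieves the same thing with less ceremony; either way no finally/shutil.rmtree bookkeeping is needed.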


# Test get_recent_issues when not cached
def test_get_recent_issues_uncached():
    # Clear cache
    recent_issues_cache.clear()

    # Mock DB
    db = MagicMock()
    # Mock query result - create a Mock object that behaves like the row
    mock_row = MagicMock()
    mock_row.id = 1
    mock_row.description = "test"
    mock_row.category = "Road"
    mock_row.created_at = MagicMock()
    mock_row.created_at.isoformat.return_value = "2023-01-01"
    mock_row.image_path = "img.jpg"
    mock_row.status = "open"
    mock_row.upvotes = 0
    mock_row.location = "Loc"
    mock_row.latitude = 10.0
    mock_row.longitude = 10.0

    # Setup chain of calls: db.query(...).order_by(...).offset(...).limit(...).all()
    # Note: query() returns a Query object.
    mock_query = MagicMock()
    db.query.return_value = mock_query
    mock_query.order_by.return_value = mock_query
    mock_query.offset.return_value = mock_query
    mock_query.limit.return_value = mock_query
    mock_query.all.return_value = [mock_row]

    # Call function
    response = get_recent_issues(limit=10, offset=0, db=db)

    assert isinstance(response, list)
    assert len(response) == 1
    assert response[0]["id"] == 1
54 changes: 11 additions & 43 deletions backend/utils.py
@@ -159,54 +159,22 @@ def process_uploaded_image_sync(file: UploadFile) -> tuple[Image.Image, bytes]:
Synchronously validate, resize, and strip EXIF from uploaded image.
Returns a tuple of (PIL Image, image bytes).
"""
    # Check file size
    file.file.seek(0, 2)
    file_size = file.file.tell()
    file.file.seek(0)

    if file_size > MAX_FILE_SIZE:
        raise HTTPException(
            status_code=413,
            detail=f"File too large. Maximum size allowed is {MAX_FILE_SIZE // (1024*1024)}MB"
        )

    # Check MIME type if magic is available
    if HAS_MAGIC:
        try:
            file_content = file.file.read(1024)
            file.file.seek(0)
            detected_mime = magic.from_buffer(file_content, mime=True)

            if detected_mime not in ALLOWED_MIME_TYPES:
                raise HTTPException(
                    status_code=400,
                    detail=f"Invalid file type. Only image files are allowed. Detected: {detected_mime}"
                )
        except Exception as e:
            logger.error(f"Magic check failed: {e}")
            pass
    # Use existing validation logic (which handles size limits and basic validation)
    img = _validate_uploaded_file_sync(file)
Comment on lines +162 to +163

⚠️ Potential issue | 🟡 Minor

Unguarded dereference if _validate_uploaded_file_sync ever returns None.

The return type of _validate_uploaded_file_sync is Optional[Image.Image]. While the current implementation always returns a valid image or raises, the type contract permits None. Line 167 (Image.new(img.mode, img.size)) would immediately raise AttributeError with no actionable HTTP error for the caller.

🛡️ Proposed defensive guard
     img = _validate_uploaded_file_sync(file)
+    if img is None:
+        raise HTTPException(status_code=400, detail="Failed to process image file.")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/utils.py` around lines 162 - 163: the code calls
_validate_uploaded_file_sync(file) and immediately dereferences its result (used
in Image.new(img.mode, img.size)), but _validate_uploaded_file_sync has an
Optional[Image.Image] return type; add a defensive guard after the call that
checks if img is None and raises a clear HTTP/client error (e.g., raise
HTTPException(status_code=400, detail="Invalid or empty uploaded image") or a
ValueError that your handler maps to a 400) before using img, so the failure
produces an actionable HTTP response instead of an AttributeError.


    try:
        img = Image.open(file.file)
        original_format = img.format

        # Resize if needed
        if img.width > 1024 or img.height > 1024:
            ratio = min(1024 / img.width, 1024 / img.height)
            new_width = int(img.width * ratio)
            new_height = int(img.height * ratio)
            img = img.resize((new_width, new_height), Image.Resampling.BILINEAR)

        # Strip EXIF
        img_no_exif = Image.new(img.mode, img.size)
        img_no_exif.paste(img)
Comment on lines +163 to 168
Copilot AI Feb 25, 2026

Missing EXIF orientation handling before stripping metadata. Images with EXIF orientation tags (commonly from mobile phones) will display incorrectly after EXIF stripping. You should apply ImageOps.exif_transpose(img) immediately after calling _validate_uploaded_file_sync and before creating the new image without EXIF. This ensures the image pixels are rotated correctly before removing the orientation metadata.
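The suggested sequencing can be sketched as follows (assumes Pillow is installed; a freshly created image carries no EXIF, so the transpose is a no-op copy, whereas a phone photo with Orientation=6 would be rotated 90° before the metadata is stripped):

```python
from PIL import Image, ImageOps

img = Image.new("RGB", (4, 2))  # stand-in for the validated upload

# Bake any EXIF orientation into the pixels first...
upright = ImageOps.exif_transpose(img)

# ...then strip metadata by copying pixels into a fresh image.
upright_no_exif = Image.new(upright.mode, upright.size)
upright_no_exif.paste(upright)

print(upright_no_exif.size)
```

Done in the other order, the orientation tag is discarded before it is applied, and the stripped image renders sideways or upside down.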


        # Save to BytesIO
        output = io.BytesIO()
        # Preserve format or default to JPEG (handling mode compatibility)
        # JPEG doesn't support RGBA, so use PNG for RGBA if format not specified
        if original_format:
            fmt = original_format
        # Preserve format or default to JPEG/PNG based on mode
        # _validate_uploaded_file_sync doesn't return the format explicitly if resized,
        # but img.format is None if resized.
        # If not resized, img.format is available.
        if img.format:
            fmt = img.format
        else:
            fmt = 'PNG' if img.mode == 'RGBA' else 'JPEG'
Comment on lines +176 to 179
cubic-dev-ai bot (Contributor), Feb 25, 2026

P2: Defaulting to JPEG when img.format is missing breaks palette-mode images (e.g., GIF/PNG with mode "P") after resize, because PIL can’t save "P" as JPEG. This causes valid uploads to fail. Treat non-JPEG-compatible modes as PNG (or convert to RGB) when img.format is unavailable.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/utils.py, line 176:

<comment>Defaulting to JPEG when img.format is missing breaks palette-mode images (e.g., GIF/PNG with mode "P") after resize, because PIL can’t save "P" as JPEG. This causes valid uploads to fail. Treat non-JPEG-compatible modes as PNG (or convert to RGB) when img.format is unavailable.</comment>

<file context>
@@ -159,54 +159,22 @@ def process_uploaded_image_sync(file: UploadFile) -> tuple[Image.Image, bytes]:
+        # _validate_uploaded_file_sync doesn't return the format explicitly if resized,
+        # but img.format is None if resized.
+        # If not resized, img.format is available.
+        if img.format:
+            fmt = img.format
         else:
</file context>
Suggested change
        if img.format:
            fmt = img.format
        else:
            fmt = 'PNG' if img.mode == 'RGBA' else 'JPEG'
        if img.format:
            fmt = img.format
        else:
            fmt = 'PNG' if img.mode in ('RGBA', 'P', 'LA') else 'JPEG'
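The failure the P2 comment describes can be reproduced in a few lines (assumes Pillow is installed; historically PIL raises OSError "cannot write mode P as JPEG", though the guard below tolerates versions that may auto-convert):

```python
import io
from PIL import Image

img = Image.new("P", (4, 4))  # palette-mode image, e.g. from a GIF upload

buf = io.BytesIO()
try:
    img.save(buf, format="JPEG")
    direct_save_ok = True
except OSError:
    # Typical outcome: JPEG has no raw mode for "P"
    direct_save_ok = False

# Converting to RGB first always produces a saveable image.
img.convert("RGB").save(buf, format="JPEG")
print(direct_save_ok, buf.tell() > 0)
```

Treating "P" (and "LA") as PNG, as the suggestion does, sidesteps the issue without a lossy mode conversion; convert("RGB") is the alternative when JPEG output is required.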


@@ -215,11 +183,11 @@ def process_uploaded_image_sync(file: UploadFile) -> tuple[Image.Image, bytes]:

        return img_no_exif, img_bytes

    except Exception as pil_error:
        logger.error(f"PIL processing failed: {pil_error}")
    except Exception as e:
        logger.error(f"Image processing failed: {e}")
        raise HTTPException(
            status_code=400,
            detail="Invalid image file."
            detail="Failed to process image file."
        )

async def process_uploaded_image(file: UploadFile) -> tuple[Image.Image, bytes]: