Conversation
Rewrites the AI assistant reference to cover all major subsystems discovered in the current codebase:

- Full directory tree with purpose annotations
- Plugin system lifecycle, BasePlugin contract, and file structure
- Vegas scroll mode and VegasDisplayMode enum
- Web interface API endpoints and response format
- Configuration management (main config, secrets, hot-reload)
- Logging conventions using get_logger()
- Testing patterns with PluginTestCase and available mocks
- Coding standards, naming conventions, and common pitfalls
- Development workflow for new plugins and emulator setup

https://claude.ai/code/session_01Fd5xaf9kpQMw3Pk5R9dUdt
Update claude.md
## Weather plugin — API data never fetched after config saved via web UI
WeatherPlugin did not override on_config_change(). The base class only
updates self.config and self.enabled, so instance attributes set in
__init__ (self.api_key, self.location, self.units, self.update_interval,
self.show_current/hourly/daily, self.modes) were never refreshed.
On every subsequent update() call the stale self.api_key still equalled
the sentinel 'YOUR_OPENWEATHERMAP_API_KEY', causing an early return with
a warning and no API request ever being made.
Fix: add on_config_change() that re-reads all config-derived attributes,
rebuilds self.modes, resets the update timer and error backoff so the
new API key / location is used on the next update cycle, and clears the
layout cache.
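The shape of the fix can be sketched as follows. This is a simplified illustration, not the actual plugin code: the stub `BasePlugin` and the `_apply_config` helper are hypothetical names, and the real plugin re-derives more attributes (the show_current/hourly/daily toggles, self.modes) than shown. The point is to centralize config-derived attributes so `__init__` and `on_config_change` cannot drift apart:

```python
class BasePlugin:
    """Minimal stand-in for the real base class (illustration only)."""

    def __init__(self, config: dict):
        self.config = config
        self.enabled = config.get('enabled', True)

    def on_config_change(self, new_config: dict) -> None:
        # The real base class stops here -- which was the bug.
        self.config = new_config
        self.enabled = new_config.get('enabled', True)


class WeatherPlugin(BasePlugin):
    def __init__(self, config: dict):
        super().__init__(config)
        self.layout_cache: dict = {}
        self.last_update = 0.0
        self._apply_config(config)

    def _apply_config(self, config: dict) -> None:
        # Every config-derived attribute lives here, so a config change
        # refreshes exactly the same state that __init__ set up.
        self.api_key = config.get('api_key', 'YOUR_OPENWEATHERMAP_API_KEY')
        self.location = config.get('location', {})
        self.units = config.get('units', 'imperial')
        self.update_interval = config.get('update_interval', 600)

    def on_config_change(self, new_config: dict) -> None:
        super().on_config_change(new_config)
        self._apply_config(new_config)
        self.last_update = 0.0        # force a fetch with the new key/location
        self.layout_cache.clear()     # drop layouts rendered for the old config
```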
## Weather plugin — API key returned in plaintext to the browser
GET /api/v3/plugins/config calls config_manager.load_config(), which
deep-merges config_secrets.json into the main config before returning.
The plugin config dict therefore contains the live API key value and
that value was returned to the browser unchanged.
The x-secret:true annotation in config_schema.json was only honoured on
the POST (save) path, not on the GET path.
Fix: after loading plugin_config in get_plugin_config(), load the plugin
schema and recursively blank out any field marked x-secret:true before
the response is serialised (replace value with "").
Also fix the matching save path: add _drop_empty_secrets() in
save_plugin_config() so that if the user re-submits a form where an
x-secret field is blank (because it was masked on GET), the empty string
does not overwrite the stored secret.
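Both paths can be sketched like this, assuming a simplified schema shape where each property may carry an `x-secret` flag. The real helpers walk the plugin's config_schema.json; the function names mirror the description above, but the bodies here are illustrative:

```python
def blank_secrets(config: dict, schema_props: dict) -> dict:
    """GET path: return a copy of config with every x-secret field blanked."""
    out = {}
    for key, value in config.items():
        prop = schema_props.get(key, {})
        if isinstance(value, dict):
            out[key] = blank_secrets(value, prop.get('properties', {}))
        elif prop.get('x-secret') is True:
            out[key] = ''          # never serialize the live secret
        else:
            out[key] = value
    return out


def drop_empty_secrets(incoming: dict, schema_props: dict) -> dict:
    """POST path: drop blank x-secret fields so they don't overwrite stored values."""
    out = {}
    for key, value in incoming.items():
        prop = schema_props.get(key, {})
        if isinstance(value, dict):
            out[key] = drop_empty_secrets(value, prop.get('properties', {}))
        elif prop.get('x-secret') is True and value == '':
            continue  # masked on GET, re-submitted blank: keep the stored secret
        else:
            out[key] = value
    return out
```

A non-blank submission still passes through, so users can rotate a key by typing a new value.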
## Stocks plugin — chart toggle has no effect
StockDisplayRenderer.__init__() caches
self.toggle_chart = config.get('display', {}).get('toggle_chart', True)
at construction time. StockTickerPlugin had no on_config_change()
override, so config changes from the web UI only updated the base class
self.config dict. Neither config_manager.toggle_chart nor
display_renderer.toggle_chart was ever refreshed.
Even reload_config() only updated display_renderer.config (the dict
reference) but not the cached boolean attribute, so the chart always
rendered as if toggle_chart=True.
Fix:
- Add on_config_change() to StockTickerPlugin that updates
config_manager.plugin_config, re-runs config_manager._load_config() to
reparse all attributes, syncs display_renderer.toggle_chart from the
freshly parsed value, updates all other component refs, and clears the
scroll cache so the next display() re-renders with the new layout.
- Fix reload_config() to also sync display_renderer.toggle_chart after
calling config_manager.reload_config().
https://claude.ai/code/session_01Fd5xaf9kpQMw3Pk5R9dUdt
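The core of this fix, reduced to the two classes involved. These are heavily simplified stand-ins (no config_manager, chart renderer, or scroll helper), but they show the essential point: syncing the dict reference is not enough when a component caches a scalar at construction time:

```python
class StockDisplayRenderer:
    def __init__(self, config: dict):
        self.config = config
        # Cached once at construction time -- the bug: nothing refreshed this.
        self.toggle_chart = config.get('display', {}).get('toggle_chart', True)


class StockTickerPlugin:
    def __init__(self, config: dict):
        self.config = config
        self.display_renderer = StockDisplayRenderer(config)
        self.scroll_cache: dict = {}

    def on_config_change(self, new_config: dict) -> None:
        self.config = new_config
        self.display_renderer.config = new_config  # dict ref alone is not enough
        # Sync the cached scalar from the freshly parsed config.
        self.display_renderer.toggle_chart = (
            new_config.get('display', {}).get('toggle_chart', True)
        )
        self.scroll_cache.clear()  # next display() re-renders with the new layout
```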
…internal plugin actions
- Add server-side rendering for x-widget: "custom-html" in plugin config
template, enabling plugins like of-the-day to load custom file manager UIs
- Hide internal web_ui_actions (those without a title) from the Plugin
Actions section - they're API-only actions used by custom widgets
- Mask x-secret fields in GET plugin config and skip empty secrets on POST
to prevent exposing/overwriting API keys
- Fix config_secrets template to use plugin_id key ("ledmatrix-weather")
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…2d' into Dev

# Conflicts:
#	web_interface/blueprints/api_v3.py
… actions

The plugin action endpoint used a wrapper Python script that embedded JSON params as a Python literal in source code. This caused failures with large payloads or special characters. Now pipes params directly via stdin.

Also fixed: a shadowed logger variable causing UnboundLocalError on action failures, and added debug logging for failed actions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
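The stdin approach can be sketched like this (the function name and timeout are illustrative, not the project's actual API). Because the parameters travel as a JSON stream rather than Python source text, payload size and special characters no longer matter:

```python
import json
import subprocess
import sys


def run_plugin_action(script_path: str, params: dict) -> dict:
    """Run an action script, piping params as JSON via stdin.

    No quoting/escaping issues: params never touch source code or argv.
    """
    result = subprocess.run(
        [sys.executable, script_path],
        input=json.dumps(params),  # arbitrary size and characters are safe
        capture_output=True,
        text=True,
        timeout=30,
        check=True,
    )
    return json.loads(result.stdout)
```

On the script side, the counterpart is simply `params = json.load(sys.stdin)`.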
…ately

Upload button and edit modal had two separate issues:

- Edit modal closed immediately on open because JSONEditor injects toolbar buttons without type="button" into the DOM; when the modal was inside the HTMX plugin config form, clicking those buttons accidentally submitted the form. Fix: move #edit-modal and #delete-modal to document.body on widget init so they are outside the form entirely. An initModals() cleanup pass removes stale modals on re-injection.
- Added event.stopPropagation() to the file-manager container to prevent any stray clicks from reaching the HTMX form.

Day-number key validation (1-365) is preserved in upload_file.py and save_file.py; it is required for the display logic to resolve the correct entry for the current day of year.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
toggle_category.py was returning HTTP 400 "Category not found in config" for any JSON file that existed on disk but had no entry in config/config.json under of-the-day.categories (e.g. manually-placed files or files added before the upload workflow ran add_category_to_config). list_files.py already displays such files with enabled=True as a default, so the toggle switch was visible but non-functional.

Fix: if the category is missing from config when a toggle arrives, auto-register it (enabled=True, data_file, display_name derived from filename) and add it to category_order, matching what upload_file.py does via update_config.py, then apply the requested enabled state.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
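The auto-registration logic reduces to something like the following. The config layout matches the keys named above; the display_name derivation is an assumption about how update_config.py formats names:

```python
def toggle_category(config: dict, name: str, enabled: bool) -> dict:
    """Toggle a category, auto-registering it first if it is untracked in config."""
    otd = config.setdefault('of-the-day', {})
    categories = otd.setdefault('categories', {})
    order = otd.setdefault('category_order', [])
    if name not in categories:
        # Mirror what upload_file.py does via update_config.py.
        categories[name] = {
            'enabled': True,
            'data_file': f'of_the_day/{name}.json',
            'display_name': name.replace('_', ' ').title(),
        }
        if name not in order:
            order.append(name)
    categories[name]['enabled'] = enabled  # apply the requested state
    return config
```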
- Fix upload button and edit modal (modal moved to body, out of HTMX form)
- Restore 1-365 day-number key validation in upload_file.py and save_file.py
- Fix toggle HTTP 400: auto-register untracked categories on first toggle
Leftover from the wrapper-script removal: wrapper_path no longer exists at this point in the code path, so the os.unlink() was a no-op at best and a NameError at worst.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
All upstream changes merged cleanly with no conflicts. Our existing fixes (secret masking, plugin action stdin, stale wrapper_path cleanup, of-the-day web UI) are fully preserved.

Upstream changes included:

- feat: Starlark Apps integration with schema-driven config + security hardening
- feat(plugins): add sorting, filtering, and fix Update All button
- fix(plugins): prevent module collision between shared module names
- fix(schedule): hot-reload config + normalize per-day mode variant
- fix(odds): use update_interval as cache TTL, fix live game cache refresh
- fix(web): handle dotted keys in schema/config path helpers
- fix(web): wire up Check & Update All plugins button
- fix(web): use window.installedPlugins for bulk update button
- fix(starlark): fix Python 3.13 importlib.reload() incompatibility
- fix(cache): move odds key check in get_data_type_from_key
- perf(store): cache GitHub API calls, eliminate redundant requests
- docs: Waveshare display info in README

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…es config

_load_data_files() only iterated self.categories.items(), so any category listed in category_order but absent from the categories dict (e.g. word_of_the_day, bible_verse_of_the_day) was silently skipped, leaving current_items empty and causing the plugin to show "No Data".

Fix: build a unified load list that merges self.categories with any entries in self.category_order that lack a config entry, auto-deriving the data_file path as of_the_day/<category_name>.json. Explicit config entries always take priority; auto-discovered ones default to enabled=True.

Also registered word_of_the_day and bible_verse_of_the_day in config/config.json so the categories dict is consistent with category_order (config.json is not committed; fixed on device).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
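The merge described above can be sketched as follows (category shapes are simplified; enabled and data_file are the keys the description names, and the helper name is illustrative):

```python
def build_load_list(categories: dict, category_order: list) -> dict:
    """Merge configured categories with order-only entries so none are skipped."""
    load_list = dict(categories)  # explicit config entries take priority
    for name in category_order:
        if name not in load_list:
            load_list[name] = {
                'enabled': True,  # auto-discovered entries default to enabled
                'data_file': f'of_the_day/{name}.json',
            }
    return load_list
```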
## Walkthrough

Adds a new "Of The Day" plugin with category-based daily content, CLI/web scripts for managing JSON category files, a file-manager web UI with JSONEditor, extensive documentation overhaul, secret-masking and raw-save endpoints in the API, font loading additions, and multiple tests/utility updates.
## Sequence Diagrams

```mermaid
sequenceDiagram
    participant User as User/UI
    participant Plugin as OfTheDayPlugin
    participant FS as File System / Data Files
    participant Display as DisplayManager
    participant Cache as CacheManager
    User->>Plugin: initialize(config, managers)
    Plugin->>Plugin: _register_fonts()
    Plugin->>FS: _load_data_files()
    FS-->>Plugin: file paths & json data
    Plugin->>Cache: _load_todays_items()
    Cache-->>Plugin: today's items
    User->>Plugin: update()
    Plugin->>Plugin: check day change
    alt day changed
        Plugin->>FS: reload today's items
        FS-->>Plugin: updated items
    end
    User->>Plugin: display(force_clear)
    Plugin->>Plugin: determine current category & mode
    alt title screen
        Plugin->>Display: _display_title() -> draw operations
    else content screen
        Plugin->>Display: _display_content() -> draw operations
    end
    Display-->>Plugin: render complete
    User->>Plugin: on_config_change(new_config)
    Plugin->>Plugin: reload categories & reset state
```

```mermaid
sequenceDiagram
    participant WebUser as Web User
    participant UI as File Manager UI
    participant API as API v3
    participant Script as Plugin Script
    participant FS as File System
    WebUser->>UI: open file manager
    UI->>API: POST action=list-files
    API->>Script: run list-files
    Script->>FS: read of_the_day directory
    FS-->>Script: file list
    Script-->>API: files JSON
    API-->>UI: render file grid
    WebUser->>UI: upload JSON
    UI->>API: POST action=upload-file (filename, content)
    API->>Script: run upload-file
    Script->>FS: write file, update config
    FS-->>Script: success
    Script-->>API: success
    API-->>UI: refresh list
    WebUser->>UI: edit & save
    UI->>API: POST action=get-file / save-file
    API->>Script: run get/save scripts
    Script->>FS: read/write file
    FS-->>Script: success
    Script-->>API: status
    API-->>UI: update UI
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (1 warning)
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
web_interface/blueprints/api_v3.py (1)
4743-4784: ⚠️ Potential issue | 🟡 Minor: Treat whitespace-only secret inputs as empty.

`_drop_empty_secrets`/`_filter_empty_secrets` currently allow whitespace-only values to pass through, which can still overwrite stored secrets. Consider trimming strings when filtering.

🔧 Suggested fix

```diff
-        elif v != '':
-            result[k] = v
+        elif not (isinstance(v, str) and v.strip() == ''):
+            result[k] = v
@@
-        elif v is not None and v != '':
-            filtered[k] = v
+        elif v is not None and not (isinstance(v, str) and v.strip() == ''):
+            filtered[k] = v
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web_interface/blueprints/api_v3.py` around lines 4743 - 4784, The secret-filtering helpers (_drop_empty_secrets and _filter_empty_secrets) currently treat whitespace-only strings as non-empty; update both functions to trim string values before checking emptiness by using isinstance(v, str) and v.strip() != '' so any string that is only whitespace is treated as empty and omitted, while leaving non-string values (None, numbers, dicts) unchanged and still recursing into nested dicts as before.
🟡 Minor comments (13)
plugins/stocks/display_renderer.py-398-409 (1)
398-409: ⚠️ Potential issue | 🟡 Minor: Element gap formula seems misaligned with the new single-image-per-stock design.

Line 409 adds `element_gap * (len(stock_displays) * 2 - 1)` to the total width. The comment references old code where each stock had multiple sub-elements (logo, text), but in this refactored version each stock is a single pre-composed image. The `* 2` multiplier likely over-allocates horizontal space, creating larger gaps than intended. Consider whether this should simply be `element_gap * (len(stock_displays) - 1)` or if the old formula is intentionally preserved for visual parity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/display_renderer.py` around lines 398 - 409, The total width computation overcounts element gaps because element_gap is multiplied by (len(stock_displays) * 2 - 1), which assumes two sub-elements per stock; update the formula in display_renderer.py to reflect the new single-image-per-stock design by replacing the multiplier with (len(stock_displays) - 1) (i.e., use element_gap * (len(stock_displays) - 1)) so total_width sums int(display.width) for each display, adds stock_gap between displays, and adds a single element_gap between adjacent displays (referencing stock_gap, element_gap, total_width, and stock_displays).

plugins/stocks/display_renderer.py-364-378 (1)
364-378: ⚠️ Potential issue | 🟡 Minor: Hardcoded y-positions (5, 15, 25) ignore display height.

Unlike `create_stock_display`, which dynamically centers text vertically, the static display hardcodes y-coordinates at 5, 15, and 25 pixels. This assumes a ~32px display height and won't adapt to other display sizes. Since the coding guidelines require supporting different display sizes by checking `display_manager.width` and `display_manager.height`, consider computing these positions relative to `self.display_height`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/display_renderer.py` around lines 364 - 378, The code currently uses hardcoded y positions (5,15,25) to draw symbol_text, price_text, and change_text; update the drawing to compute vertical positions from the display height (self.display_height or display_manager.height) and the measured text bboxes so it adapts to different display sizes. Compute each text height from their bbox (e.g., symbol_height = int(symbol_bbox[3] - symbol_bbox[1]) etc.), choose a small spacing value, calculate total_height = sum(heights of present lines) + spacing*(n-1), set start_y = int((self.display_height - total_height)/2), then draw symbol at (symbol_x, start_y), price at (price_x, start_y + symbol_height + spacing), and change (if present) at the next offset; update the block that references symbol_bbox, price_bbox, change_bbox, symbol_text, price_text, change_text, and symbol_font/price_font/change_font accordingly.

CLAUDE.md-11-126 (1)
11-126: ⚠️ Potential issue | 🟡 Minor: markdownlint: add fenced-code languages and blank lines around the table.

MD040 and MD058 are flagged. Please add language identifiers to fenced blocks (e.g., `text`, `bash`, `json`, `python`) and blank lines before/after the API endpoints table.

Also applies to: 205-233, 246-284, 366-378

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CLAUDE.md` around lines 11 - 126, Update the fenced code blocks in CLAUDE.md by adding language identifiers (e.g., ```text for the repo tree, ```bash / ```python / ```json where appropriate) to satisfy MD040, and ensure blank lines are added immediately before and after the API endpoints table to satisfy MD058; specifically update the large repo-tree fence and the other fenced blocks referenced (lines ~205-233, ~246-284, ~366-378) and insert a single blank line above and below the table labeled "API endpoints" so markdownlint stops flagging MD040/MD058.

plugins/weather/manager.py-318-323 (1)
318-323: ⚠️ Potential issue | 🟡 Minor: Cache key should include location/units to avoid stale config.

On config change, `_fetch_weather()` still pulls the old cached data because the key is static. Scope the key to the current location/units.

♻️ Cache key scoping

```diff
-        cache_key = 'weather'
+        cache_key = f"{self.plugin_id}:{self.location.get('city')}:{self.location.get('state')}:{self.location.get('country')}:{self.units}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/weather/manager.py` around lines 318 - 323, The cache key in _fetch_weather() is static and causes stale results when config like location or units change; update the cache key construction in the _fetch_weather method to include the current configuration (e.g., include self.location and self.units or their normalized forms) so lookups and subsequent cache sets use the same scoped key (e.g., "weather:{location}:{units}"); ensure any place that writes to the cache (the cache_manager.set call) uses this same composed key and normalize/escape values to avoid invalid keys.

CLAUDE.md-188-193 (1)
188-193: ⚠️ Potential issue | 🟡 Minor: Docs still reference the old secrets key.

The template now uses `ledmatrix-weather`, but the docs still say `weather.api_key`. This will mislead setup.

📘 Doc fix

```diff
- - `weather.api_key` — OpenWeatherMap
+ - `ledmatrix-weather.api_key` — OpenWeatherMap
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CLAUDE.md` around lines 188 - 193, The docs still reference the old secrets key `weather.api_key`; update the Secrets Config section for `config/config_secrets.json` to match the template's new key by replacing `weather.api_key` with the template's `ledmatrix-weather` key (e.g., `ledmatrix-weather.api_key` or the exact key name used in the template) and adjust the description to mention OpenWeatherMap under `ledmatrix-weather`; ensure any other mentions of `weather.api_key` in that section are renamed to `ledmatrix-weather` so the docs match the template.

plugin-repos/of-the-day/scripts/save_file.py-12-74 (1)
12-74: ⚠️ Potential issue | 🟡 Minor: Restrict saves to JSON files and ensure the data dir exists.

`save_file.py` allows arbitrary extensions and assumes the data directory exists. Enforcing `.json` and creating the directory prevents unexpected writes and avoids runtime failures.

🔧 Suggested fix

```diff
 plugin_dir = Path(__file__).parent.parent
 data_dir = plugin_dir / 'of_the_day'
+data_dir.mkdir(parents=True, exist_ok=True)
@@
 # Security: ensure filename doesn't contain path traversal
 if '..' in filename or '/' in filename or '\\' in filename:
     print(json.dumps({
         'status': 'error',
         'message': 'Invalid filename'
     }))
     sys.exit(1)
+
+if not filename.endswith('.json'):
+    print(json.dumps({
+        'status': 'error',
+        'message': 'File must be a JSON file (.json)'
+    }))
+    sys.exit(1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/scripts/save_file.py` around lines 12 - 74, The script currently allows arbitrary file extensions and assumes the data directory exists; update the logic around filename and data_dir to enforce JSON-only saves and create the directory if missing: after reading filename, validate it endswith ".json" (reject otherwise) in addition to the existing traversal checks, and before opening file_path use Path.mkdir(parents=True, exist_ok=True) on data_dir (derived from plugin_dir / 'of_the_day') so the folder is created if absent; keep all other validations (content JSON, key ranges) intact and continue writing to file_path.

plugins/weather/manager.py-18-53 (1)
18-53: ⚠️ Potential issue | 🟡 Minor: Remove unused module-level logging setup.

The module-level `logger = logging.getLogger(__name__)` is never used; the class inherits `self.logger` from `BasePlugin`, which is already initialized via `get_logger()`. Remove the unused `import logging` and logger assignment.

Cleanup

```diff
-import logging
@@
-logger = logging.getLogger(__name__)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/weather/manager.py` around lines 18 - 53, Remove the unused module-level logger: delete the top-level "import logging" and the "logger = logging.getLogger(__name__)" assignment in manager.py; verify there are no remaining references to that module-level "logger" (the class should use the inherited self.logger from BasePlugin via get_logger()), and run a quick lint or tests to ensure no unused-import or undefined-name errors remain.

plugins/weather/manager.py-484-544 (1)
484-544: ⚠️ Potential issue | 🟡 Minor: Normalize `display()` signature to match `BasePlugin` and use keyword arguments for clarity.

The current signature `display(self, display_mode: str = None, force_clear: bool = False)` deviates from `BasePlugin.display(self, force_clear: bool = False)`. Although external callers use keyword arguments (so no runtime failure occurs), internal compatibility methods call `self.display('weather', force_clear)` positionally, which is fragile. Reorder parameters to match the base class and make internal calls explicit:

Parameter order fix

```diff
-    def display(self, display_mode: str = None, force_clear: bool = False) -> None:
+    def display(self, force_clear: bool = False, display_mode: Optional[str] = None) -> None:
@@
     """Display current weather (compatibility method for display controller)."""
-    self.display('weather', force_clear)
+    self.display(force_clear=force_clear, display_mode='weather')
@@
     """Display hourly forecast (compatibility method for display controller)."""
-    self.display('hourly_forecast', force_clear)
+    self.display(force_clear=force_clear, display_mode='hourly_forecast')
@@
     """Display daily forecast (compatibility method for display controller)."""
-    self.display('daily_forecast', force_clear)
+    self.display(force_clear=force_clear, display_mode='daily_forecast')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/weather/manager.py` around lines 484 - 544, The display() signature must match BasePlugin.display(self, force_clear: bool = False): reorder parameters to def display(self, force_clear: bool = False, display_mode: str = None) (or accept force_clear first and display_mode as keyword-only) and update the docstring accordingly; then change any internal positional calls like self.display('weather', force_clear) to use keywords (e.g., self.display(display_mode='weather', force_clear=True/False)) so callers of BasePlugin.display and internal compatibility callers both work; adjust tests/call sites that invoke display positionally to use keyword arguments and keep logic in display (current_mode handling, _on_mode_changed, _display_current_weather/_display_hourly_forecast/_display_daily_forecast) unchanged.

plugin-repos/of-the-day/scripts/toggle_category.py-86-94 (1)
86-94: ⚠️ Potential issue | 🟡 Minor: Avoid catching broad `Exception` when saving config.

Line 90 should catch specific exceptions (e.g., `OSError`, `TypeError`) so real bugs aren't masked. As per coding guidelines, catch specific exceptions, not bare `except:` statements.

🧯 Narrowed exception handling

```diff
 try:
     config_file.parent.mkdir(parents=True, exist_ok=True)
     with open(config_file, 'w', encoding='utf-8') as f:
         json.dump(config, f, indent=2, ensure_ascii=False)
-except Exception as e:
+except (OSError, TypeError) as e:
     print(json.dumps({
         'status': 'error',
         'message': f'Failed to save config: {str(e)}'
     }))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/scripts/toggle_category.py` around lines 86 - 94, Replace the broad except Exception in the config save block that wraps config_file.parent.mkdir and json.dump with narrow exception handlers: catch OSError for filesystem/mkdir/open failures and TypeError for json.dump serialization errors (e.g., non-serializable objects), and handle each by printing the same error JSON using the existing print payload; locate the try that references config_file, config and json.dump and change the single broad except to except (OSError, TypeError) as e (or separate except clauses) so unrelated errors aren't masked.

web_interface/blueprints/api_v3.py-2604-2608 (1)
2604-2608: ⚠️ Potential issue | 🟡 Minor: Include `live_priority` in the default plugin config.

The default config returned when no plugin config exists should include core fields required by config/config.json to avoid creating partial configs on save.

🔧 Suggested fix

```diff
-        plugin_config = {
-            'enabled': True,
-            'display_duration': 15
-        }
+        plugin_config = {
+            'enabled': True,
+            'display_duration': 15,
+            'live_priority': False
+        }
```

Based on learnings: Plugin configuration in config/config.json must include fields: enabled, display_duration, live_priority, transition configuration, and plugin-specific settings.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web_interface/blueprints/api_v3.py` around lines 2604 - 2608, The default plugin_config block is missing the required live_priority field; update the code that sets plugin_config (the dict created when plugin_config is falsy in api_v3.py) to include 'live_priority' with a sensible default (e.g., 0) so the returned config matches the core fields expected by config/config.json and prevents partial configs on save; ensure the change is applied where plugin_config is initialized so other consumers (save/load) see the complete structure.

web_interface/blueprints/api_v3.py-1021-1034 (1)
1021-1034: ⚠️ Potential issue | 🟡 Minor: Mask non-string secret values too.

Right now only non-placeholder strings are masked. If a secret is stored as a non-string (e.g., numeric token), it will be returned in plaintext. Consider masking any non-empty, non-placeholder value.

🔧 Suggested fix

```diff
-            elif isinstance(v, str) and v and not v.startswith('YOUR_'):
-                masked[k] = '••••••••'
-            else:
-                masked[k] = v
+            elif v not in (None, '') and not (isinstance(v, str) and v.startswith('YOUR_')):
+                masked[k] = '••••••••'
+            else:
+                masked[k] = v
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web_interface/blueprints/api_v3.py` around lines 1021 - 1034, The _mask_values function only masks non-placeholder string values, leaving non-string secrets (e.g., numeric tokens) exposed; update _mask_values to recursively handle dicts and also mask any non-empty, non-placeholder value regardless of type by returning '••••••••' for: (a) strings that are non-empty and not starting with 'YOUR_', and (b) non-string values that are not None/empty (e.g., numbers, booleans, lists) — keep recursion for nested dicts and apply the masked value to masked_config before jsonify({'status': 'success', 'data': masked_config}).

plugin-repos/of-the-day/manager.py-93-120 (1)
93-120: ⚠️ Potential issue | 🟡 Minor: Add type hints to helper methods.

Private helpers (e.g., `_register_fonts`) lack return annotations; the project guidelines require parameter and return type hints across functions. As per coding guidelines, "Use type hints for function parameters and return values".

🛠️ Example adjustment

```diff
-    def _register_fonts(self):
+    def _register_fonts(self) -> None:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/manager.py` around lines 93 - 120, Update the _register_fonts method signature to include type hints by annotating the return type as None (e.g., def _register_fonts(self) -> None:) and, if this class uses type-annotated attributes, ensure plugin_manager has an appropriate type (e.g., Optional[PluginManager]) and font_manager is typed (e.g., FontManager) in the class attribute declarations or __init__; keep the body unchanged but preserve existing exception handling and references to self.plugin_id, self.title_color, self.content_color, and self.logger.

plugin-repos/of-the-day/manager.py-678-710 (1)
678-710: ⚠️ Potential issue | 🟡 Minor: Clear display before rendering no-data/error screens and avoid bare `except`.

These screens should clear the display buffer before drawing, and font-load errors should be caught explicitly to avoid masking failures. Based on learnings, "Clear display before rendering by calling `display_manager.clear()`". As per coding guidelines, "Catch specific exceptions, not bare `except:` statements".

🛠️ Suggested fix

```diff
 def _display_no_data(self):
     """Display message when no data is available."""
+    self.display_manager.clear()
     img = Image.new('RGB', (self.display_manager.width, self.display_manager.height), self.background_color)
@@
-    try:
-        font = ImageFont.truetype('assets/fonts/4x6-font.ttf', 8)
-    except:
-        font = ImageFont.load_default()
+    try:
+        font = ImageFont.truetype('assets/fonts/4x6-font.ttf', 8)
+    except OSError as e:
+        self.logger.warning(f"Failed to load 4x6 font: {e}, using fallback")
+        font = ImageFont.load_default()
@@
 def _display_error(self):
     """Display error message."""
+    self.display_manager.clear()
     img = Image.new('RGB', (self.display_manager.width, self.display_manager.height), self.background_color)
@@
-    try:
-        font = ImageFont.truetype('assets/fonts/4x6-font.ttf', 8)
-    except:
-        font = ImageFont.load_default()
+    try:
+        font = ImageFont.truetype('assets/fonts/4x6-font.ttf', 8)
+    except OSError as e:
+        self.logger.warning(f"Failed to load 4x6 font: {e}, using fallback")
+        font = ImageFont.load_default()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/manager.py` around lines 678 - 710, In _display_no_data and _display_error, clear the display before drawing by calling display_manager.clear(), and replace the bare except when loading the font with a specific exception handler (catch OSError/IOError from ImageFont.truetype) so font-loading failures aren’t masked; keep the fallback to ImageFont.load_default(), render the text, then assign self.display_manager.image = img.copy() and call self.display_manager.update_display() as before.
🧹 Nitpick comments (11)
plugins/stocks/manager.py (5)
87-107: Move `import traceback` to the top of the file.

The `import traceback` on line 105 (and again on line 198) is inside exception handlers. Module-level imports are preferred for clarity and to avoid repeated import overhead. Also, `self.logger.exception()` would include the traceback automatically, eliminating the need for `traceback.format_exc()` entirely.

Suggested fix

```diff
 # At top of file:
 import time
+import traceback
 from typing import Dict, Any, Optional
 ...
 except Exception as e:
-    import traceback
-    self.logger.error("Error updating stock/crypto data: %s", e)
-    self.logger.debug("Traceback: %s", traceback.format_exc())
+    self.logger.exception("Error updating stock/crypto data")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/manager.py` around lines 87 - 107, Move the inline "import traceback" out of exception handlers and place it at the top-level imports, and replace the manual traceback logging with logger.exception to automatically include traceback details; specifically, in the update method of the class (method name: update) remove "import traceback" and the self.logger.debug("Traceback: %s", traceback.format_exc()) call and instead call self.logger.exception("Error updating stock/crypto data: %s", e) (also update the other exception handler that currently imports traceback on line ~198 to the same pattern) so all traceback handling is done via module-level import and logger.exception.
203-210: Calling private method `_create_error_display()` on the renderer.

Line 206 accesses `self.display_renderer._create_error_display()`. This breaks encapsulation. As noted in the renderer review, this method should be made public (`create_error_display()`).

Additionally, `_show_error_state` catches bare `Exception` (line 209); narrowing to `OSError`, or at minimum logging with `self.logger.exception()`, would preserve the traceback.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/manager.py` around lines 203 - 210, The _show_error_state method is calling the renderer's private method _create_error_display and catching a bare Exception; change the call to the public create_error_display on self.display_renderer (replace self.display_renderer._create_error_display() with self.display_renderer.create_error_display()) and tighten error handling by catching OSError (or the more specific exceptions raised by display operations) and/or use self.logger.exception(...) instead of logger.error to preserve the traceback when the paste/update_display steps fail; update references in _show_error_state accordingly.
376-401: `on_config_change` accesses private `_load_config()` and directly mutates component internals.

Line 382 calls `self.config_manager._load_config()`, a private method. Line 386 directly sets `self.display_renderer.config`, line 390 sets `self.chart_renderer.config`, and line 391 sets `self.data_fetcher.config`. This tightly couples the plugin to the internal structure of all its components.

Consider adding public `on_config_change(new_config)` methods to each component so they can manage their own re-initialization, keeping the coupling at the interface level.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/manager.py` around lines 376 - 401, Replace direct manipulation of component internals in on_config_change by invoking public reinitialization hooks on each component: avoid calling the private self.config_manager._load_config() and instead call a public method such as self.config_manager.on_config_change(new_config); replace assignments to self.display_renderer.config and self.display_renderer.toggle_chart with self.display_renderer.on_config_change(new_config) (so the renderer can update its cached scalars), call self.chart_renderer.on_config_change(new_config) rather than setting self.chart_renderer.config, and call self.data_fetcher.on_config_change(self.config_manager.plugin_config) instead of mutating its config directly; keep scroll_helper.clear_cache() and the logger.info(...) call but ensure each component implements the suggested on_config_change(new_config) public API to encapsulate internal updates.
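The decoupling described above can be sketched as follows. The class names mirror those in the review, but the bodies and the `chart_color` key are illustrative stand-ins, not the actual codebase:

```python
class ChartRenderer:
    """Illustrative component that owns its config-derived state."""

    def __init__(self, config: dict) -> None:
        self.config = config
        self._refresh_derived()

    def on_config_change(self, new_config: dict) -> None:
        # Public hook: the component re-derives its own cached values.
        self.config = new_config
        self._refresh_derived()

    def _refresh_derived(self) -> None:
        self.line_color = self.config.get('chart_color', 'green')


class StockTickerPlugin:
    """The plugin forwards config changes through public interfaces only."""

    def __init__(self, config: dict) -> None:
        self.chart_renderer = ChartRenderer(config)

    def on_config_change(self, new_config: dict) -> None:
        # No reaching into self.chart_renderer.config directly.
        self.chart_renderer.on_config_change(new_config)
```

With this shape, adding a new cached scalar to a component never requires touching the plugin.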
212-228: Use explicit `Optional[str]` for the `display_mode` parameter.

Per PEP 484 (and Ruff RUF013), implicit `Optional` via `str = None` is prohibited. Use `Optional[str] = None` or `str | None = None`.

Suggested fix

```diff
-    def get_cycle_duration(self, display_mode: str = None) -> Optional[float]:
+    def get_cycle_duration(self, display_mode: Optional[str] = None) -> Optional[float]:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/manager.py` around lines 212 - 228, The get_cycle_duration function's display_mode parameter uses an implicit Optional via "display_mode: str = None" which violates PEP 484 / Ruff RUF013; update the signature in manager.py for get_cycle_duration to use an explicit nullable type such as "display_mode: Optional[str] = None" (and add the corresponding typing import if missing) or the modern union form "display_mode: str | None = None" so the annotation is explicit and linter-compliant.
149-154: Remove unnecessary `hasattr` checks on `scroll_helper`; these attributes are guaranteed by ScrollHelper's stable interface.

ScrollHelper is a well-defined class initialized with all required attributes (`cached_image`, `scroll_position`, `calculated_duration`) and methods (`is_scroll_complete()`, `reset_scroll()`) in its `__init__`. The `hasattr` guards on lines 100, 150, 261, 313, and 330 are unnecessary defensive programming. Since the plugin always instantiates a `ScrollHelper` object directly, these attributes are guaranteed to exist. Remove the `hasattr` checks and trust the API contract.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/manager.py` around lines 149 - 154, Remove the unnecessary hasattr guards around ScrollHelper usage and call its API directly: replace the hasattr check in the block using self.scroll_helper with a direct call to self.scroll_helper.is_scroll_complete() and directly read self.scroll_helper.scroll_position; likewise remove other hasattr checks referencing ScrollHelper (e.g., the ones around is_scroll_complete, scroll_position, reset_scroll at the other locations mentioned) so the code trusts ScrollHelper's stable interface (symbols: self.scroll_helper, is_scroll_complete(), scroll_position, reset_scroll(), _has_scrolled).

plugins/stocks/display_renderer.py (5)
482-492: Unreachable branch: the `price_range == 0` check on line 488 is dead code.

Lines 483–486 already handle the case where `price_range < 0.01` by adjusting it to `0.02`. After that adjustment, `price_range` can never be `0` when execution reaches line 488. This entire `if` block (lines 488–492) is unreachable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/display_renderer.py` around lines 482 - 492, The branch checking price_range == 0 is unreachable because you already force price_range to 0.02 when it is < 0.01; remove or refactor that dead block and instead detect the "all prices equal" case before you nudge the range. Concretely, capture the original equality (e.g., if max_price == min_price) or check the pre-adjustment price_range, and if true draw the horizontal line with draw.line(...) using chart_x, chart_width and chart_height; otherwise perform the small-range adjustment logic that updates min_price, max_price and price_range as currently implemented.
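The refactor the prompt describes (capture the flat-series case before nudging the range) can be sketched as a small pure function. The function name and thresholds below are illustrative, not from the plugin:

```python
def compute_price_range(min_price: float, max_price: float):
    """Return (min, max, range, is_flat) for chart scaling.

    Detects the all-prices-equal case *before* widening a tiny range,
    so the flat-line branch stays reachable.
    """
    is_flat = max_price == min_price
    price_range = max_price - min_price
    if not is_flat and price_range < 0.01:
        # Widen a near-zero range so the chart keeps vertical resolution.
        mid = (min_price + max_price) / 2
        min_price, max_price = mid - 0.01, mid + 0.01
        price_range = 0.02
    return min_price, max_price, price_range, is_flat
```

The caller draws a horizontal line when `is_flat` is true and scales the chart otherwise, with no dead branch.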
116-169: Use `logger.exception()` instead of `logger.error()` and narrow exception types in font loading.

Per static analysis (TRY400, BLE001), `logger.error()` on lines 157 and 168 should be `logger.exception()` to automatically include the traceback. Also, catching bare `Exception` is overly broad; `OSError` would be more appropriate for font-loading failures.

Suggested fix

```diff
-        except Exception as e:
-            self.logger.error(f"Error loading font {font_name}: {e}, using default")
+        except OSError as e:
+            self.logger.exception(f"Error loading font {font_name}, using default")
             # Fall back to default font
             default_font_path = os.path.join('assets', 'fonts', 'PressStart2P-Regular.ttf')
             try:
                 if os.path.exists(default_font_path):
                     return ImageFont.truetype(default_font_path, font_size)
                 else:
                     self.logger.warning("Default font not found, using PIL default")
                     return ImageFont.load_default()
-            except Exception as e:
-                self.logger.error(f"Error loading default font: {e}")
+            except OSError as e:
+                self.logger.exception("Error loading default font")
                 return ImageFont.load_default()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/display_renderer.py` around lines 116 - 169, In _load_custom_font_from_element_config, replace the broad except Exception blocks used when loading fonts with except OSError (or OSError and IOError if preferred) and change the logging calls from self.logger.error(...) to self.logger.exception(...) so the traceback is captured; update the try/except around ImageFont.truetype(...) (both the BDF fallback and default-font load) to catch OSError and call self.logger.exception(...) with the same context message before falling back to ImageFont.load_default().
450-456: `_get_stock_color` appears unused; color logic is handled inline.

Both `create_stock_display` (lines 226–229) and `create_static_display` (lines 342–345) determine colors inline using the instance's configured color attributes, never calling this method. This is dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/display_renderer.py` around lines 450 - 456, The _get_stock_color helper is dead code because color selection is done inline in create_stock_display and create_static_display; remove the _get_stock_color method from the DisplayRenderer class (and any associated unused imports) and update or delete any remaining references to _get_stock_color (search for "_get_stock_color" to be sure), then run tests/lint to confirm nothing else relies on it; if you prefer to keep centralized logic instead, replace the inline color logic in create_stock_display and create_static_display with calls to _get_stock_color(change) and remove the instance-configured color attributes instead — choose one approach and make the code consistent.
19-114: Redundant re-assignment of `stocks_custom` on line 111.

Line 111 re-declares `stocks_custom = customization.get('stocks', {})`, which was already assigned on line 32 with the same expression. This shadows the outer variable with an identical value.

Suggested fix

```diff
         else:
             # New format: fonts at customization.stocks/crypto.symbol/price/price_delta
             # Use stocks font config (crypto can override later if needed, but currently shares fonts)
-            stocks_custom = customization.get('stocks', {})
             self.symbol_font = self._load_custom_font_from_element_config(stocks_custom.get('symbol', {}))
             self.price_font = self._load_custom_font_from_element_config(stocks_custom.get('price', {}))
             self.price_delta_font = self._load_custom_font_from_element_config(stocks_custom.get('price_delta', {}))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/display_renderer.py` around lines 19 - 114, The constructor (__init__) unnecessarily reassigns stocks_custom = customization.get('stocks', {}) a second time; remove that redundant reassignment and reuse the stocks_custom variable already defined near the top of the method so the subsequent font-loading calls (self.symbol_font, self.price_font, self.price_delta_font) call _load_custom_font_from_element_config with the existing stocks_custom.get('...') values; ensure no other references rely on a redefined variable and run tests to confirm behavior unchanged.
507-528: `_create_error_display` is private but called externally from `manager.py`.

In `manager.py` line 206, this method is invoked as `self.display_renderer._create_error_display()`. Since it's part of the public contract between `StockTickerPlugin` and `StockDisplayRenderer`, rename it to `create_error_display` (drop the leading underscore) to reflect its actual visibility.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/stocks/display_renderer.py` around lines 507 - 528, The method _create_error_display in StockDisplayRenderer is intended to be used from outside (invoked by StockTickerPlugin in manager.py via self.display_renderer._create_error_display()), so rename the method to create_error_display (remove the leading underscore) in plugins/stocks/display_renderer.py and update all call sites (e.g., the call in manager.py where self.display_renderer._create_error_display() is used) to self.display_renderer.create_error_display(); ensure the function signature and return type (Image.Image) remain the same and run tests to catch any remaining references to _create_error_display.

plugin-repos/of-the-day/manager.py (1)
441-451: Cache fonts to avoid per-frame disk I/O on Raspberry Pi.

Loading TTFs on every render is expensive on Pi hardware. Cache fonts once (e.g., via a helper) and reuse in both display paths.
As per coding guidelines, "Optimize code for Raspberry Pi's limited RAM and CPU capabilities".♻️ Possible refactor (cache fonts)
```diff
@@
-from typing import Dict, Any, List, Optional
+from typing import Dict, Any, List, Optional, Tuple
@@
         # Register fonts
         self._register_fonts()
+        self._title_font: Optional[ImageFont.ImageFont] = None
+        self._body_font: Optional[ImageFont.ImageFont] = None
+
+    def _get_fonts(self) -> Tuple[ImageFont.ImageFont, ImageFont.ImageFont]:
+        if self._title_font is None:
+            try:
+                self._title_font = ImageFont.truetype('assets/fonts/PressStart2P-Regular.ttf', 8)
+            except OSError as e:
+                self.logger.warning(f"Failed to load PressStart2P font: {e}, using fallback")
+                self._title_font = self.display_manager.small_font if hasattr(self.display_manager, 'small_font') else ImageFont.load_default()
+        if self._body_font is None:
+            try:
+                self._body_font = ImageFont.truetype('assets/fonts/4x6-font.ttf', 6)
+            except OSError as e:
+                self.logger.warning(f"Failed to load 4x6 font: {e}, using fallback")
+                self._body_font = self.display_manager.extra_small_font if hasattr(self.display_manager, 'extra_small_font') else ImageFont.load_default()
+        return self._title_font, self._body_font
@@
-        try:
-            title_font = ImageFont.truetype('assets/fonts/PressStart2P-Regular.ttf', 8)
-        except Exception as e:
-            self.logger.warning(f"Failed to load PressStart2P font: {e}, using fallback")
-            title_font = self.display_manager.small_font if hasattr(self.display_manager, 'small_font') else ImageFont.load_default()
-
-        try:
-            body_font = ImageFont.truetype('assets/fonts/4x6-font.ttf', 6)
-        except Exception as e:
-            self.logger.warning(f"Failed to load 4x6 font: {e}, using fallback")
-            body_font = self.display_manager.extra_small_font if hasattr(self.display_manager, 'extra_small_font') else ImageFont.load_default()
+        title_font, body_font = self._get_fonts()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/manager.py` around lines 441 - 451, The code loads TTF fonts on each render (title_font/body_font in manager.py) causing per-frame disk I/O; cache and reuse fonts instead by introducing a single-font loader or cache (e.g., a get_font or FontCache helper) that loads PressStart2P-Regular.ttf and 4x6-font.ttf once and returns the cached ImageFont objects on subsequent calls; update the places that set title_font and body_font to call this helper (falling back to display_manager.small_font/extra_small_font or ImageFont.load_default() as currently done) so you avoid repeated ImageFont.truetype calls during rendering.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- CLAUDE.md
- config/config_secrets.template.json
- plugin-repos/of-the-day/manager.py
- plugin-repos/of-the-day/scripts/save_file.py
- plugin-repos/of-the-day/scripts/toggle_category.py
- plugin-repos/of-the-day/scripts/upload_file.py
- plugin-repos/of-the-day/web_ui/file_manager.html
- plugins/stocks/display_renderer.py
- plugins/stocks/manager.py
- plugins/weather/manager.py
- web_interface/blueprints/api_v3.py
- web_interface/blueprints/pages_v3.py
- web_interface/templates/v3/partials/plugin_config.html
```python
# Validate filename
if not filename.endswith('.json'):
    print(json.dumps({
        'status': 'error',
        'message': 'File must be a JSON file (.json)'
    }))
    sys.exit(1)

# Validate JSON content
try:
    data = json.loads(content)
except json.JSONDecodeError as e:
    print(json.dumps({
        'status': 'error',
        'message': f'Invalid JSON: {str(e)}'
    }))
    sys.exit(1)

# Validate structure (must be an object with day number keys 1-365)
if not isinstance(data, dict):
    print(json.dumps({
        'status': 'error',
        'message': 'JSON must be an object with day numbers (1-365) as keys'
    }))
    sys.exit(1)

# Check if keys are valid day numbers
for key in data.keys():
    try:
        day_num = int(key)
        if day_num < 1 or day_num > 365:
            print(json.dumps({
                'status': 'error',
                'message': f'Day number {day_num} is out of range (must be 1-365)'
            }))
            sys.exit(1)
    except ValueError:
        print(json.dumps({
            'status': 'error',
            'message': f'Invalid key "{key}": all keys must be day numbers (1-365)'
        }))
        sys.exit(1)

# Save file
file_path = data_dir / filename
with open(file_path, 'w', encoding='utf-8') as f:
```
Add path traversal protection for filename.
Currently, ../ or path separators can escape the of_the_day directory. This is a write-through security risk.
🛡️ Fix
```diff
 # Validate filename
 if not filename.endswith('.json'):
     print(json.dumps({
         'status': 'error',
         'message': 'File must be a JSON file (.json)'
     }))
     sys.exit(1)
+
+if '..' in filename or '/' in filename or '\\' in filename:
+    print(json.dumps({
+        'status': 'error',
+        'message': 'Invalid filename'
+    }))
+    sys.exit(1)
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 44-44: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugin-repos/of-the-day/scripts/upload_file.py` around lines 30 - 75, The
filename is not validated against path traversal, allowing "../" or absolute
paths to escape data_dir; update the upload routine (around the filename,
data_dir, and file_path usage) to sanitize and restrict paths: ensure filename
is not absolute and contains no path separators or ".." (reject if
os.path.isabs(filename) or os.path.sep in filename or '..' in filename),
restrict to a safe pattern (e.g., regex like r'^[A-Za-z0-9_.-]+\.json$'), then
construct file_path = (data_dir / filename).resolve() and verify file_path is
inside data_dir by checking
file_path.resolve().is_relative_to(data_dir.resolve()) or by comparing prefixes
(reject and exit with an error if the check fails) before opening the file.
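The combined checks the prompt describes (extension allowlist plus confinement to the data directory) can be sketched as one helper. `resolve_upload_path` and `SAFE_NAME` are hypothetical names, and `Path.is_relative_to` requires Python 3.9+:

```python
import re
from pathlib import Path

# Allowlist: plain .json filenames only, no separators or absolute paths.
SAFE_NAME = re.compile(r'^[A-Za-z0-9_.-]+\.json$')


def resolve_upload_path(data_dir: Path, filename: str) -> Path:
    """Reject traversal attempts and confine the target to data_dir.

    Raises ValueError for anything that is not a plain .json name
    resolving inside data_dir.
    """
    if not SAFE_NAME.match(filename) or '..' in filename:
        raise ValueError('Invalid filename')
    target = (data_dir / filename).resolve()
    # Belt and braces: verify the resolved path is still under data_dir,
    # even if the allowlist check is loosened later.
    if not target.is_relative_to(data_dir.resolve()):
        raise ValueError('Invalid filename')
    return target
```

The caller opens the returned path only after both checks pass, so a loosened regex alone can never re-open the traversal hole.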
plugins/stocks/manager.py (Outdated)
```python
def _display_static(self, force_clear: bool = False) -> None:
    """Display stocks in static mode - one at a time without scrolling."""
    # Signal not scrolling
    self.display_manager.set_scrolling_state(False)

    # Get current stock
    symbols = list(self.stock_data.keys())
    if not symbols:
        self._show_error_state()
        return

    current_symbol = symbols[self.current_stock_index % len(symbols)]
    current_data = self.stock_data[current_symbol]

    # Create static display
    static_image = self.display_renderer.create_static_display(current_symbol, current_data)

    # Update display - paste overwrites previous content (no need to clear)
    self.display_manager.image.paste(static_image, (0, 0))
    self.display_manager.update_display()

    # Move to next stock after a delay
    time.sleep(2)  # Show each stock for 2 seconds
    self.current_stock_index += 1
```
time.sleep(2) on line 178 blocks the display thread.
The display() method is called by the display controller in a loop. A 2-second blocking sleep inside _display_static freezes the entire thread — preventing deferred updates, config changes, and other plugin processing. The display controller should manage timing externally; the plugin should render and return immediately.
Consider tracking last_stock_switch_time and comparing against time.time() to decide when to advance the index, similar to how update() throttles data fetches.
Suggested approach
```diff
+        # In __init__:
+        self._last_static_switch = 0
+        self._static_display_seconds = 2
+
     def _display_static(self, force_clear: bool = False) -> None:
         """Display stocks in static mode - one at a time without scrolling."""
         self.display_manager.set_scrolling_state(False)
         symbols = list(self.stock_data.keys())
         if not symbols:
             self._show_error_state()
             return
         current_symbol = symbols[self.current_stock_index % len(symbols)]
         current_data = self.stock_data[current_symbol]
         static_image = self.display_renderer.create_static_display(current_symbol, current_data)
         self.display_manager.image.paste(static_image, (0, 0))
         self.display_manager.update_display()
-        # Move to next stock after a delay
-        time.sleep(2)  # Show each stock for 2 seconds
-        self.current_stock_index += 1
+        now = time.time()
+        if now - self._last_static_switch >= self._static_display_seconds:
+            self.current_stock_index += 1
+            self._last_static_switch = now
```

🧰 Tools
+ self._last_static_switch = now🧰 Tools
🪛 Ruff (0.15.2)
[warning] 156-156: Unused method argument: force_clear
(ARG002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/stocks/manager.py` around lines 156 - 179, The blocking time.sleep(2)
inside _display_static freezes the display thread; remove that sleep and instead
track a timestamp (e.g., add a last_stock_switch_time attribute) and an interval
constant (e.g., STOCK_DISPLAY_INTERVAL) and advance current_stock_index only
when time.time() - last_stock_switch_time >= STOCK_DISPLAY_INTERVAL; ensure
_display_static renders and returns immediately (no sleeps) and update
last_stock_switch_time when you increment current_stock_index so the display
controller can call display() in its loop without being blocked.
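The timestamp-based rotation can also be isolated in a small standalone helper, which makes the non-blocking behavior easy to unit-test. `StaticRotator` and its injectable `clock` parameter are illustrative, not part of the plugin:

```python
import time


class StaticRotator:
    """Advance an index based on wall-clock time instead of sleeping."""

    def __init__(self, interval: float = 2.0, clock=time.time) -> None:
        self.interval = interval
        self.clock = clock  # injectable for deterministic tests
        self.index = 0
        self._last_switch = self.clock()

    def maybe_advance(self) -> int:
        """Call once per display() pass; returns the index to render.

        Never blocks: it only advances when enough time has elapsed.
        """
        now = self.clock()
        if now - self._last_switch >= self.interval:
            self.index += 1
            self._last_switch = now
        return self.index
```

The display loop calls `maybe_advance()` every pass and renders immediately, so config changes and deferred updates are never starved.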
plugins/weather/manager.py (Outdated)
```python
# Get coordinates using geocoding API
geo_url = f"http://api.openweathermap.org/geo/1.0/direct?q={city},{state},{country}&limit=1&appid={self.api_key}"
```
Use HTTPS for the geocoding call.
The API key is sent over plain HTTP here. Switch to HTTPS to avoid exposing it in transit.
🔒 Fix
```diff
-geo_url = f"http://api.openweathermap.org/geo/1.0/direct?q={city},{state},{country}&limit=1&appid={self.api_key}"
+geo_url = f"https://api.openweathermap.org/geo/1.0/direct?q={city},{state},{country}&limit=1&appid={self.api_key}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/weather/manager.py` around lines 336 - 338, The geocoding request
builds geo_url with plain "http", exposing self.api_key in transit; update the
URL construction for geo_url to use "https://api.openweathermap.org/..."
(replace "http" with "https") and ensure any other OpenWeatherMap endpoints in
this module use HTTPS as well so the API key (self.api_key) is sent securely.
plugins/weather/manager.py (Outdated)
```python
# Rebuild the enabled modes list
self.modes = []
if self.show_current:
    self.modes.append('weather')
if self.show_hourly:
    self.modes.append('hourly_forecast')
if self.show_daily:
    self.modes.append('daily_forecast')
if not self.modes:
    self.modes = ['weather']

# Reset update timer and error state so new settings take effect immediately
self.last_update = 0
self.consecutive_errors = 0
self.error_backoff_time = 60

# Clear layout cache since units/display settings may have changed
if hasattr(self, '_layout_cache'):
    del self._layout_cache
```
Reset mode index/state after config changes.
When display modes are toggled, current_mode_index can point past the new list length and crash (IndexError). Reset index and current mode after rebuilding the list.
🔁 Fix mode reset
```diff
 if not self.modes:
     self.modes = ['weather']
+
+self.current_mode_index = 0
+self.current_display_mode = None
```
+ self.current_display_mode = None📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # Rebuild the enabled modes list | |
| self.modes = [] | |
| if self.show_current: | |
| self.modes.append('weather') | |
| if self.show_hourly: | |
| self.modes.append('hourly_forecast') | |
| if self.show_daily: | |
| self.modes.append('daily_forecast') | |
| if not self.modes: | |
| self.modes = ['weather'] | |
| # Reset update timer and error state so new settings take effect immediately | |
| self.last_update = 0 | |
| self.consecutive_errors = 0 | |
| self.error_backoff_time = 60 | |
| # Clear layout cache since units/display settings may have changed | |
| if hasattr(self, '_layout_cache'): | |
| del self._layout_cache | |
| # Rebuild the enabled modes list | |
| self.modes = [] | |
| if self.show_current: | |
| self.modes.append('weather') | |
| if self.show_hourly: | |
| self.modes.append('hourly_forecast') | |
| if self.show_daily: | |
| self.modes.append('daily_forecast') | |
| if not self.modes: | |
| self.modes = ['weather'] | |
| self.current_mode_index = 0 | |
| self.current_display_mode = None | |
| # Reset update timer and error state so new settings take effect immediately | |
| self.last_update = 0 | |
| self.consecutive_errors = 0 | |
| self.error_backoff_time = 60 | |
| # Clear layout cache since units/display settings may have changed | |
| if hasattr(self, '_layout_cache'): | |
| del self._layout_cache |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/weather/manager.py` around lines 970 - 989, The modes list is rebuilt
but current_mode_index/current_mode can become out-of-range; after rebuilding
self.modes (in the same block that resets last_update and clears _layout_cache)
reset the mode state by setting self.current_mode_index = 0 (or clamp it to
min(self.current_mode_index, len(self.modes)-1)) and update self.current_mode =
self.modes[self.current_mode_index] (or None if modes is empty); ensure this
runs whether or not self.modes changed and handle missing attributes safely
(check for existence of self.current_mode_index/self.current_mode before
assigning).
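The clamping variant the prompt mentions can be a one-line helper. `clamp_mode_index` is a hypothetical name for illustration:

```python
def clamp_mode_index(modes: list, current_index: int) -> int:
    """Keep a saved mode index valid after the modes list is rebuilt.

    Returns 0 for an empty list, otherwise a value in range; the plugin
    would store the result back into self.current_mode_index.
    """
    if not modes:
        return 0
    return min(max(current_index, 0), len(modes) - 1)
```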
First, I really appreciate you trying to help and submit a PR! Thank you! But can you make sure you are committing the plugin fixes to the ledmatrix-plugin repo? If you can move those over and then check out the CodeRabbit suggestions to make sure we are trending towards better code quality.
plugins/stocks/ and plugins/weather/ were accidentally committed to the LEDMatrix repo, bypassing the plugins/.gitignore rule. Plugin source code belongs in the ledmatrix-plugins/ledmatrix-stocks/ledmatrix-weather repos. The api_v3.py changes from the same commit (secrets masking/filtering) are intentionally kept here as they belong to the web interface. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- toggle_category.py: validate category_name against allowlist (letters/
numbers/_/-) to prevent path traversal in data_file construction; fix
bool() mis-parsing of string inputs ("false" now correctly parsed);
narrow broad except to (OSError, TypeError)
- upload_file.py: add path traversal check (../ / \) after .json check
- save_file.py: add mkdir(exist_ok=True) for data_dir; add .json extension
enforcement before path traversal check
- file_manager.html: add escapeJsString() helper; apply to all 4 inline
onclick/onchange interpolations to prevent XSS via filenames
- api_v3.py: resolve script_file and validate it is confined to plugin_dir
using relative_to(); replace all 'python3' subprocess calls with
sys.executable to avoid PATH surprises
- pages_v3.py: fail closed on secret masking errors (raise instead of
silent pass) to prevent leaking secrets on unexpected failures
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
of-the-day/manager.py:
- Add _get_positive_float() helper to coerce and validate interval config values; use it for update_interval, display_rotate_interval, subtitle_rotate_interval in __init__ and on_config_change to prevent TypeError/ValueError from bad user config
- Replace 2 bare except: blocks in _display_content font loading with except OSError and add self.logger.warning() for diagnostics
- Add return type annotation to _register_fonts (-> None)

web_interface/blueprints/api_v3.py:
- Treat whitespace-only strings as empty in _drop_empty_secrets and _filter_empty_secrets (v.strip() == '' check) so " " doesn't overwrite stored secrets
- Add live_priority: False to default plugin config returned when no config exists, preventing partial configs on first save
- Update _mask_values to mask non-string non-empty values (numeric tokens etc.) not just non-empty non-placeholder strings

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
of-the-day/manager.py:
- Add _get_fonts() method that loads title/body fonts once and caches them, replacing per-render ImageFont.truetype() calls to avoid disk I/O on every display() call on Pi hardware
- Add clear() before PIL image creation in _display_no_data and _display_error to prevent stale buffer bleed-through
- Fix bare except: in _display_no_data and _display_error font loading -> except OSError with warning log
- Add Tuple to typing imports for _get_fonts() return annotation
- Store _title_font/_body_font as None-initialized instance vars

CLAUDE.md:
- Fix weather secrets key: weather.api_key -> ledmatrix-weather.api_key
- Add language identifiers to bare fenced blocks (MD040): directory tree and plugin lifecycle/file structure -> text
- Add blank line before API endpoints table (MD058)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 10
♻️ Duplicate comments (1)
plugin-repos/of-the-day/web_ui/file_manager.html (1)
548-553: ⚠️ Potential issue | 🟠 Major

Inline handler escaping is still incomplete (" is unescaped), leaving an XSS gap.

escapeJsString() does not escape double quotes, but values are injected into double-quoted HTML attributes. A crafted filename/category containing " can break the attribute and inject script.

🔒 Minimal hardening patch

 function escapeJsString(text) {
     return String(text)
         .replace(/\\/g, '\\\\')
+        .replace(/"/g, '\\x22')
         .replace(/'/g, "\\'")
+        .replace(/</g, '\\x3C')
+        .replace(/>/g, '\\x3E')
         .replace(/\n/g, '\\n')
         .replace(/\r/g, '\\r');
 }

Also applies to: 568-573, 986-992
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/web_ui/file_manager.html` around lines 548 - 553, The onclick/onchange inline handlers inject values using escapeJsString which doesn't escape double quotes, enabling XSS; update usages in the file-card onclick (editFile call) and the input onchange (toggleCategory call) to pass safely encoded values: change the injected string to a properly escaped JS/HTML-safe form (e.g., escape double quotes and backslashes or serialize via JSON.stringify and then escape) and update or replace escapeJsString to also escape double quotes (") and backslashes so values cannot break the surrounding double-quoted HTML attribute; ensure the same fix is applied to the other occurrences noted (around lines 568-573 and 986-992) referencing editFile, toggleCategory, and escapeJsString.
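The "serialize via JSON.stringify and then escape" suggestion from the prompt above can be illustrated with a Python sketch of the same pipeline (function name and exact escape set are illustrative, not the plugin's actual code):

```python
import json
import html

def js_attr_value(value):
    # Serialize the value as a JS string literal (json.dumps escapes
    # backslashes and embedded quotes), neutralize HTML-special
    # characters inside the literal, then HTML-escape the result so it
    # cannot break out of a double-quoted HTML attribute.
    literal = json.dumps(str(value))
    literal = (literal.replace('<', '\\u003c')
                      .replace('>', '\\u003e')
                      .replace('&', '\\u0026'))
    return html.escape(literal, quote=True)
```

The key property: no raw `"`, `<`, or `>` survives, so the value is inert both as attribute content and as script text.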
🧹 Nitpick comments (4)
web_interface/blueprints/api_v3.py (1)
4755-4772: Consolidate duplicate empty-secret filters.

_drop_empty_secrets and _filter_empty_secrets do the same work in the same flow. Keeping both increases drift risk; keep one helper and reuse it once.

Also applies to: 4783-4795
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web_interface/blueprints/api_v3.py` around lines 4755 - 4772, The codebase defines two identical helpers _drop_empty_secrets and _filter_empty_secrets causing duplication; remove one and reuse the remaining function in both places where secrets_config is cleaned. Choose a single name (either _drop_empty_secrets or _filter_empty_secrets), delete the duplicate definition, and update all call sites (the two locations that currently call _drop_empty_secrets and _filter_empty_secrets) to call the chosen helper; ensure the chosen function retains the current recursive behavior and docstring so behavior is unchanged.
plugin-repos/of-the-day/manager.py (1)
482-493: Use cached fonts in _display_title() as well.

_get_fonts() was added for caching, but title rendering still does per-call disk loads. This is a hot path on Raspberry Pi.

⚡ Suggested fix
- # Load fonts - match old manager font usage
- try:
-     title_font = ImageFont.truetype('assets/fonts/PressStart2P-Regular.ttf', 8)
- except Exception as e:
-     self.logger.warning(f"Failed to load PressStart2P font: {e}, using fallback")
-     title_font = self.display_manager.small_font if hasattr(self.display_manager, 'small_font') else ImageFont.load_default()
-
- try:
-     body_font = ImageFont.truetype('assets/fonts/4x6-font.ttf', 6)
- except Exception as e:
-     self.logger.warning(f"Failed to load 4x6 font: {e}, using fallback")
-     body_font = self.display_manager.extra_small_font if hasattr(self.display_manager, 'extra_small_font') else ImageFont.load_default()
+ # Load fonts (cached)
+ title_font, body_font = self._get_fonts()

As per coding guidelines, "Optimize code for Raspberry Pi's limited RAM and CPU capabilities".
Also applies to: 602-604
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/manager.py` around lines 482 - 493, The title and body fonts are still loaded per-call inside _display_title() causing disk I/O on Raspberry Pi; update _display_title() to call the existing _get_fonts() caching helper and use the returned title_font and body_font instead of calling ImageFont.truetype inline (replace uses of PressStart2P-Regular.ttf and 4x6-font.ttf in _display_title()), and apply the same change to the similar logic around the other call site referenced (the block at the 602-604 area) so both places use the cached fonts from _get_fonts().plugin-repos/of-the-day/scripts/save_file.py (1)
89-94: Use targeted exceptions in the top-level error handler.A broad catch here blurs root causes and reduces troubleshooting quality.
🧰 Suggested fix
-except Exception as e:
+except (OSError, json.JSONDecodeError, ValueError) as e:
     print(json.dumps({
         'status': 'error',
         'message': str(e)
     }))
     sys.exit(1)

As per coding guidelines, "Catch specific exceptions, not bare except: statements".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/scripts/save_file.py` around lines 89 - 94, The top-level "except Exception as e" block in save_file.py should be narrowed to specific exceptions to avoid masking bugs; replace it with handlers for expected error types (e.g., OSError/IOError/PermissionError for file operations, json.JSONDecodeError for JSON issues, ValueError for parsing/validation) that each print the same JSON error structure and sys.exit(1), and add a final generic "except Exception" that re-raises unexpected exceptions (or logs them with full traceback) so truly unexpected errors are not silently swallowed; locate the try/except around the save/serialize logic where the current print(json.dumps(...)) is used and update the except clauses accordingly.plugin-repos/of-the-day/scripts/upload_file.py (1)
101-106: Narrow the top-level catch to expected exception classes.

The final except Exception masks failure categories and makes operational triage harder than needed.

🧰 Suggested fix

-except Exception as e:
+except (OSError, json.JSONDecodeError, ValueError, ImportError) as e:
     print(json.dumps({
         'status': 'error',
         'message': str(e)
     }))
     sys.exit(1)

As per coding guidelines, "Catch specific exceptions, not bare except: statements".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/scripts/upload_file.py` around lines 101 - 106, The broad except Exception block in the upload_file.py upload routine should be narrowed to only the expected error types (e.g., IOError/OSError for file ops, json.JSONDecodeError or ValueError for JSON handling, and requests.RequestException for network calls) and let other exceptions propagate; update the except clause(s) to catch those specific classes around the failing operations (the current except Exception as e block), still emit the same JSON error message and sys.exit(1) for handled errors, and avoid swallowing unexpected exceptions by either re-raising them or removing the blanket catch.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugin-repos/of-the-day/manager.py`:
- Around line 255-267: The code currently uses day_of_year (1-366) as the lookup
key which will fail on leap years because data files only contain keys 1-365;
update the lookup logic in the loop (where day_of_year is computed and used, and
where self.current_items is set) to normalize day 366 to 365 before
checking/assigning (e.g., compute a lookup_day = min(day_of_year, 365) or
explicitly remap 366 -> 365), then use that lookup_day string for data lookup,
logging, and assigning self.current_items[category_name].
- Around line 749-751: The get_display_duration method should validate and
coerce the config value to a positive float: read value =
self.config.get('display_duration', 40.0), attempt to convert it with
float(value), and if conversion fails or the result is <= 0, fallback to the
default 40.0 (and emit a warning via self.logger.warning or a module logger if
available); finally return the validated float. Ensure you update the
get_display_duration function to perform this conversion/check rather than
returning the raw config value.
- Around line 325-326: The code uses self.current_category_index to index
enabled_categories without validating it, causing out-of-range errors when
enabled_categories shrinks; before computing category_name and category_config
clamp or normalize self.current_category_index (e.g., if >=
len(enabled_categories) set to max(0, len(enabled_categories)-1) or wrap via
modulo) and then use the normalized index to get category_name and
category_config from enabled_categories and self.categories to avoid stale-index
errors.
- Around line 764-773: The override of on_config_change currently replaces base
behavior and can leave BasePlugin state (e.g., self.enabled and other shared
state) out of sync; call BasePlugin.on_config_change(self, config) (or
super().on_config_change(config)) from within manager.On_config_change (the
on_config_change method shown) so the base implementation runs, then proceed to
update self.config, update_interval, display_rotate_interval,
subtitle_rotate_interval, and self.categories as before to keep plugin state
consistent.
In `@plugin-repos/of-the-day/scripts/toggle_category.py`:
- Around line 47-56: The current try/except around opening and
json.load(config_file) only catches JSON decode errors and ValueError, so
filesystem errors (e.g., permission denied, IO errors) bubble up; update the
exception handling for the block that opens and loads config_file (the open(...)
and json.load(...) calls) to also catch OSError (or include IOError/OSError) and
surface those errors in the same JSON error payload (use the same process: print
a JSON object with 'status':'error' and message containing str(e) and exit),
ensuring the except still references the same variable name (e) so the printed
message remains consistent.
In `@web_interface/blueprints/api_v3.py`:
- Around line 5041-5050: The reset flow can leave config and secrets
inconsistent because _save_config_atomic writes current_config first and then
api_v3.config_manager.save_raw_file_content('secrets', current_secrets) may
fail; change to perform the secrets write before committing the main config (or
implement rollback): either write secrets first (respecting
default_secrets/preserve_secrets) and only call
_save_config_atomic(api_v3.config_manager, current_config, create_backup=True)
after secrets succeed, or if you must write config first, capture the backup
returned by _save_config_atomic and on secrets write failure call
_save_config_atomic(..., backup=that_backup) or use api_v3.config_manager to
restore the previous config and secrets to ensure atomicity; update logic around
default_secrets, preserve_secrets, _save_config_atomic, current_config and
current_secrets accordingly.
- Around line 2614-2634: If schema_mgr is missing or load_schema/plugin masking
fails, do not return unmasked plugin_config; instead fail closed: when
schema_mgr is falsy or schema_mgr.load_schema(plugin_id, use_cache=True) returns
None or raises, raise/return an error (HTTP 500/400) or apply a strict fallback
masker that wipes all scalar/string values (or at minimum all keys marked
x-secret and all string values) before returning. Locate the block around
schema_mgr, load_schema, _mask_secret_fields, plugin_config and plugin_id and
change the control flow so absence or exception from schema_mgr/load_schema
triggers the error response (or invokes the strict masker) rather than falling
through to return the original plugin_config.
- Line 5191: The functions execute_plugin_action and authenticate_spotify
contain function-local "import sys" statements that shadow the module-level sys
import and cause UnboundLocalError because sys.executable is referenced earlier
in those functions; remove the local "import sys" lines inside
execute_plugin_action and authenticate_spotify so the functions use the
module-level sys (already imported at top of the file), leaving any other sys
usages unchanged.
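The shadowing failure mode described above is easy to reproduce in isolation. This minimal sketch (hypothetical functions, not the api_v3.py code) shows why a function-local import breaks an earlier sys.executable reference:

```python
import sys

def broken():
    # The 'import sys' later in the body makes 'sys' a local name for
    # the WHOLE function, so this attribute access raises
    # UnboundLocalError even though sys is imported at module level.
    exe = sys.executable
    import sys  # the shadowing local import flagged in the review
    return exe

def fixed():
    # No local import: the module-level sys is used normally.
    return sys.executable
```

Removing the local `import sys` lines, as the prompt suggests, makes the earlier references resolve to the module-level import.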
In `@web_interface/blueprints/pages_v3.py`:
- Around line 402-418: _find_secret_fields only recurses into nested objects, so
secret definitions under arrays (type:"array" with items.properties) are missed;
update _find_secret_fields to detect when field_props.get('type') == 'array' and
field_props.get('items') and recurse into field_props['items']['properties']
using the same full_path (or full_path + '[]' consistently) so those secret
paths are included in secrets_set, and update _mask_secrets to handle list
values by iterating over list elements and applying the same masking logic to
any dict elements (call _mask_secrets on each dict item with the same full_path)
so secrets inside array items are masked as well.
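A minimal sketch of the array-aware masking the comment asks for (schema shape and field names are illustrative; the real _find_secret_fields/_mask_secrets helpers live in pages_v3.py):

```python
def mask_secrets(cfg, props):
    # Recursively blank fields marked x-secret, descending into nested
    # objects and into array items (type: "array" with items.properties),
    # so secrets inside lists are masked too. Returns a masked copy.
    result = dict(cfg)
    for name, fprops in props.items():
        if name not in result:
            continue
        if fprops.get('x-secret', False):
            if result[name]:
                result[name] = ''
        elif fprops.get('type') == 'object' and 'properties' in fprops:
            if isinstance(result[name], dict):
                result[name] = mask_secrets(result[name], fprops['properties'])
        elif fprops.get('type') == 'array' and 'properties' in fprops.get('items', {}):
            if isinstance(result[name], list):
                result[name] = [
                    mask_secrets(item, fprops['items']['properties'])
                    if isinstance(item, dict) else item
                    for item in result[name]
                ]
    return result
```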
In `@web_interface/templates/v3/partials/plugin_config.html`:
- Around line 35-37: The template currently injects raw {{ plugin_id }} and {{
html_file }} into a JS string in loadWidget (var container / fetch call), which
risks breaking parsing or injection; change the template to JSON-serialize those
values (use Jinja2's tojson) and then in the JS build the path with
encodeURIComponent before calling fetch (i.e., use the JSON-serialized plugin_id
and html_file variables and pass them through encodeURIComponent when composing
the '/api/v3/plugins/.../static/...' URL in the fetch call), leaving field_id
usage unchanged.
---
Duplicate comments:
In `@plugin-repos/of-the-day/web_ui/file_manager.html`:
- Around line 548-553: The onclick/onchange inline handlers inject values using
escapeJsString which doesn't escape double quotes, enabling XSS; update usages
in the file-card onclick (editFile call) and the input onchange (toggleCategory
call) to pass safely encoded values: change the injected string to a properly
escaped JS/HTML-safe form (e.g., escape double quotes and backslashes or
serialize via JSON.stringify and then escape) and update or replace
escapeJsString to also escape double quotes (") and backslashes so values cannot
break the surrounding double-quoted HTML attribute; ensure the same fix is
applied to the other occurrences noted (around lines 568-573 and 986-992)
referencing editFile, toggleCategory, and escapeJsString.
---
Nitpick comments:
In `@plugin-repos/of-the-day/manager.py`:
- Around line 482-493: The title and body fonts are still loaded per-call inside
_display_title() causing disk I/O on Raspberry Pi; update _display_title() to
call the existing _get_fonts() caching helper and use the returned title_font
and body_font instead of calling ImageFont.truetype inline (replace uses of
PressStart2P-Regular.ttf and 4x6-font.ttf in _display_title()), and apply the
same change to the similar logic around the other call site referenced (the
block at the 602-604 area) so both places use the cached fonts from
_get_fonts().
In `@plugin-repos/of-the-day/scripts/save_file.py`:
- Around line 89-94: The top-level "except Exception as e" block in save_file.py
should be narrowed to specific exceptions to avoid masking bugs; replace it with
handlers for expected error types (e.g., OSError/IOError/PermissionError for
file operations, json.JSONDecodeError for JSON issues, ValueError for
parsing/validation) that each print the same JSON error structure and
sys.exit(1), and add a final generic "except Exception" that re-raises
unexpected exceptions (or logs them with full traceback) so truly unexpected
errors are not silently swallowed; locate the try/except around the
save/serialize logic where the current print(json.dumps(...)) is used and update
the except clauses accordingly.
In `@plugin-repos/of-the-day/scripts/upload_file.py`:
- Around line 101-106: The broad except Exception block in the upload_file.py
upload routine should be narrowed to only the expected error types (e.g.,
IOError/OSError for file ops, json.JSONDecodeError or ValueError for JSON
handling, and requests.RequestException for network calls) and let other
exceptions propagate; update the except clause(s) to catch those specific
classes around the failing operations (the current except Exception as e block),
still emit the same JSON error message and sys.exit(1) for handled errors, and
avoid swallowing unexpected exceptions by either re-raising them or removing the
blanket catch.
In `@web_interface/blueprints/api_v3.py`:
- Around line 4755-4772: The codebase defines two identical helpers
_drop_empty_secrets and _filter_empty_secrets causing duplication; remove one
and reuse the remaining function in both places where secrets_config is cleaned.
Choose a single name (either _drop_empty_secrets or _filter_empty_secrets),
delete the duplicate definition, and update all call sites (the two locations
that currently call _drop_empty_secrets and _filter_empty_secrets) to call the
chosen helper; ensure the chosen function retains the current recursive behavior
and docstring so behavior is unchanged.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
CLAUDE.md
plugin-repos/of-the-day/manager.py
plugin-repos/of-the-day/scripts/save_file.py
plugin-repos/of-the-day/scripts/toggle_category.py
plugin-repos/of-the-day/scripts/upload_file.py
plugin-repos/of-the-day/web_ui/file_manager.html
web_interface/blueprints/api_v3.py
web_interface/blueprints/pages_v3.py
web_interface/templates/v3/partials/plugin_config.html
day_of_year = today.timetuple().tm_yday

for category_name, data in self.data_files.items():
    try:
        # Find today's entry using day of year
        day_key = str(day_of_year)

        if day_key in data:
            self.current_items[category_name] = data[day_key]
            item_title = data[day_key].get('word', data[day_key].get('title', 'N/A'))
            self.logger.info(f"Loaded item for {category_name} (day {day_of_year}): {item_title}")
        else:
            self.logger.warning(f"No entry found for day {day_of_year} in category {category_name}")
Handle leap-year day 366 explicitly.
The day key is derived as 1-366. Data files validated elsewhere to 1-365 will miss entries on leap-year day 366.
📅 Suggested fix
  day_of_year = today.timetuple().tm_yday
  for category_name, data in self.data_files.items():
      try:
-         day_key = str(day_of_year)
+         day_key = str(day_of_year)
+         if day_key not in data and day_of_year == 366:
+             day_key = '365'

🤖 Prompt for AI Agents
+ day_key = '365'🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugin-repos/of-the-day/manager.py` around lines 255 - 267, The code
currently uses day_of_year (1-366) as the lookup key which will fail on leap
years because data files only contain keys 1-365; update the lookup logic in the
loop (where day_of_year is computed and used, and where self.current_items is
set) to normalize day 366 to 365 before checking/assigning (e.g., compute a
lookup_day = min(day_of_year, 365) or explicitly remap 366 -> 365), then use
that lookup_day string for data lookup, logging, and assigning
self.current_items[category_name].
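The remapping the prompt describes can be isolated into a tiny helper (a sketch, not the plugin's actual code):

```python
from datetime import date

def day_lookup_key(today):
    # Normalize day-of-year (1-366) into the 1-365 key space used by
    # the data files: leap-year day 366 reuses the day-365 entry.
    day_of_year = today.timetuple().tm_yday
    return str(min(day_of_year, 365))
```

On Dec 31 of a leap year (day 366) this returns '365' instead of a missing key, so the lookup never comes up empty.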
category_name = enabled_categories[self.current_category_index]
category_config = self.categories.get(category_name, {})
Normalize current_category_index before indexing enabled categories.
When enabled categories shrink (day change/config change), a stale index can go out of range and trigger the error screen path.
🧩 Suggested fix
  if not enabled_categories:
      if self.last_displayed_category != "NO_DATA":
          self.last_displayed_category = "NO_DATA"
          self._display_no_data()
      return
+ self.current_category_index %= len(enabled_categories)
  # Get current category
  category_name = enabled_categories[self.current_category_index]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugin-repos/of-the-day/manager.py` around lines 325 - 326, The code uses
self.current_category_index to index enabled_categories without validating it,
causing out-of-range errors when enabled_categories shrinks; before computing
category_name and category_config clamp or normalize self.current_category_index
(e.g., if >= len(enabled_categories) set to max(0, len(enabled_categories)-1) or
wrap via modulo) and then use the normalized index to get category_name and
category_config from enabled_categories and self.categories to avoid stale-index
errors.
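The modulo normalization suggested above, sketched as a standalone helper (illustrative only; the plugin keeps the index on self):

```python
def current_category(index, enabled_categories):
    # Wrap a possibly-stale rotation index into range via modulo before
    # indexing, so a shrinking category list never raises IndexError.
    if not enabled_categories:
        return None, 0
    index %= len(enabled_categories)
    return enabled_categories[index], index
```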
def get_display_duration(self) -> float:
    """Get display duration from config."""
    return self.config.get('display_duration', 40.0)
Return a validated positive float from get_display_duration().
This currently returns raw config data and can propagate invalid/non-numeric values into scheduling logic.
✅ Suggested fix
  def get_display_duration(self) -> float:
      """Get display duration from config."""
-     return self.config.get('display_duration', 40.0)
+     return self._get_positive_float(self.config, 'display_duration', 40.0)

As per coding guidelines, "Validate inputs and handle errors early (Fail Fast principle)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugin-repos/of-the-day/manager.py` around lines 749 - 751, The
get_display_duration method should validate and coerce the config value to a
positive float: read value = self.config.get('display_duration', 40.0), attempt
to convert it with float(value), and if conversion fails or the result is <= 0,
fallback to the default 40.0 (and emit a warning via self.logger.warning or a
module logger if available); finally return the validated float. Ensure you
update the get_display_duration function to perform this conversion/check rather
than returning the raw config value.
def on_config_change(self, config: Dict[str, Any]) -> None:
    """Handle configuration changes (called when user updates config via web UI)."""
    self.logger.info("Config changed, reloading categories")

    # Update configuration
    self.config = config
    self.update_interval = self._get_positive_float(config, 'update_interval', 3600)
    self.display_rotate_interval = self._get_positive_float(config, 'display_rotate_interval', 20)
    self.subtitle_rotate_interval = self._get_positive_float(config, 'subtitle_rotate_interval', 10)
    self.categories = config.get('categories', {})
Call base on_config_change to keep plugin state consistent.
The override bypasses BasePlugin.on_config_change, so self.enabled and shared base state can drift from updated config.
🔧 Suggested fix
  def on_config_change(self, config: Dict[str, Any]) -> None:
      """Handle configuration changes (called when user updates config via web UI)."""
+     super().on_config_change(config)
      self.logger.info("Config changed, reloading categories")
      # Update configuration
-     self.config = config
      self.update_interval = self._get_positive_float(config, 'update_interval', 3600)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugin-repos/of-the-day/manager.py` around lines 764 - 773, The override of
on_config_change currently replaces base behavior and can leave BasePlugin state
(e.g., self.enabled and other shared state) out of sync; call
BasePlugin.on_config_change(self, config) (or super().on_config_change(config))
from within manager.On_config_change (the on_config_change method shown) so the
base implementation runs, then proceed to update self.config, update_interval,
display_rotate_interval, subtitle_rotate_interval, and self.categories as before
to keep plugin state consistent.
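The super()-first override pattern the comment asks for, sketched with a minimal stand-in for BasePlugin (class and attribute names assumed from the review, not the real base class):

```python
import logging
from typing import Any, Dict

class BasePlugin:
    # Minimal stand-in: the base keeps shared state (config, enabled)
    # in sync when config changes.
    def __init__(self, config: Dict[str, Any]):
        self.config = config
        self.enabled = config.get('enabled', False)
        self.logger = logging.getLogger(self.__class__.__name__)

    def on_config_change(self, config: Dict[str, Any]) -> None:
        self.config = config
        self.enabled = config.get('enabled', False)

class OfTheDayPlugin(BasePlugin):
    def on_config_change(self, config: Dict[str, Any]) -> None:
        super().on_config_change(config)  # keep base state consistent
        # Then refresh plugin-specific derived attributes.
        self.update_interval = float(config.get('update_interval', 3600))
        self.categories = config.get('categories', {})
```

Calling super() first means the subclass never has to duplicate (or forget) base-class bookkeeping like the enabled flag.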
try:
    if config_file.exists():
        with open(config_file, 'r', encoding='utf-8') as f:
            config = json.load(f)
except (json.JSONDecodeError, ValueError) as e:
    print(json.dumps({
        'status': 'error',
        'message': f'Failed to load config: {str(e)}'
    }))
    sys.exit(1)
Include filesystem errors when loading config.json.
Read failures like permissions/IO issues currently bypass this handler and can return an unstructured traceback instead of the expected JSON error payload.
🛠️ Suggested fix
-except (json.JSONDecodeError, ValueError) as e:
+except (OSError, json.JSONDecodeError, ValueError) as e:
     print(json.dumps({
         'status': 'error',
         'message': f'Failed to load config: {str(e)}'
     }))
     sys.exit(1)

🧰 Tools
🪛 Ruff (0.15.2)
[warning] 54-54: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugin-repos/of-the-day/scripts/toggle_category.py` around lines 47 - 56, The
current try/except around opening and json.load(config_file) only catches JSON
decode errors and ValueError, so filesystem errors (e.g., permission denied, IO
errors) bubble up; update the exception handling for the block that opens and
loads config_file (the open(...) and json.load(...) calls) to also catch OSError
(or include IOError/OSError) and surface those errors in the same JSON error
payload (use the same process: print a JSON object with 'status':'error' and
message containing str(e) and exit), ensuring the except still references the
same variable name (e) so the printed message remains consistent.
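A sketch of the widened handler as a standalone function (illustrative; the real script's structure may differ):

```python
import json
import sys

def load_config(config_file):
    # Load config.json; on either a filesystem error (OSError covers
    # permission/IO failures) or a parse error, emit the script's JSON
    # error payload and exit 1 instead of an unstructured traceback.
    try:
        with open(config_file, 'r', encoding='utf-8') as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError, ValueError) as e:
        print(json.dumps({
            'status': 'error',
            'message': f'Failed to load config: {e!s}'
        }))
        sys.exit(1)
```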
if schema_mgr:
    try:
        schema_for_mask = schema_mgr.load_schema(plugin_id, use_cache=True)
        if schema_for_mask and 'properties' in schema_for_mask:
            def _mask_secret_fields(cfg, props):
                result = dict(cfg)
                for fname, fprops in props.items():
                    if fprops.get('x-secret', False):
                        if fname in result and result[fname]:
                            result[fname] = ''
                    elif fprops.get('type') == 'object' and 'properties' in fprops:
                        if fname in result and isinstance(result[fname], dict):
                            result[fname] = _mask_secret_fields(
                                result[fname], fprops['properties']
                            )
                return result
            plugin_config = _mask_secret_fields(plugin_config, schema_for_mask['properties'])
    except Exception as mask_err:
        import logging as _logging
        _logging.warning("Could not mask secret fields for %s: %s", plugin_id, mask_err)
Fail closed when schema manager is unavailable to avoid secret leakage.
On Line 2614, masking only runs when schema_mgr exists. If it is unavailable, this endpoint can return merged secret values from load_config() unmasked. Return an error (or apply a strict fallback masker) instead of returning config in that state.
🔒 Suggested fix
- if schema_mgr:
+ if not schema_mgr:
+     logger.error(
+         "Schema manager unavailable for plugin config GET (plugin_id=%s); refusing to return merged config",
+         plugin_id
+     )
+     return error_response(
+         ErrorCode.SYSTEM_ERROR,
+         'Schema manager not initialized',
+         status_code=503
+     )
+ if schema_mgr:
      try:
          schema_for_mask = schema_mgr.load_schema(plugin_id, use_cache=True)

🧰 Tools
🪛 Ruff (0.15.2)
[warning] 2622-2622: Unnecessary key check before dictionary access
Replace with dict.get
(RUF019)
[warning] 2631-2631: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/blueprints/api_v3.py` around lines 2614 - 2634, If schema_mgr
is missing or load_schema/plugin masking fails, do not return unmasked
plugin_config; instead fail closed: when schema_mgr is falsy or
schema_mgr.load_schema(plugin_id, use_cache=True) returns None or raises,
raise/return an error (HTTP 500/400) or apply a strict fallback masker that
wipes all scalar/string values (or at minimum all keys marked x-secret and all
string values) before returning. Locate the block around schema_mgr,
load_schema, _mask_secret_fields, plugin_config and plugin_id and change the
control flow so absence or exception from schema_mgr/load_schema triggers the
error response (or invokes the strict masker) rather than falling through to
return the original plugin_config.
# Save updated configs (atomic save to prevent corruption)
success, error_msg = _save_config_atomic(api_v3.config_manager, current_config, create_backup=True)
if not success:
    return error_response(
        ErrorCode.CONFIG_SAVE_FAILED,
        f"Failed to reset configuration: {error_msg}",
        status_code=500
    )
if default_secrets or not preserve_secrets:
    api_v3.config_manager.save_raw_file_content('secrets', current_secrets)
Reset flow can leave config files partially updated.
Line 5042 saves main config first, then Line 5050 saves secrets. If the second write fails, main is already reset and secrets are stale, producing inconsistent state. Add rollback or transactional ordering with recovery.
💾 Suggested rollback pattern
- success, error_msg = _save_config_atomic(api_v3.config_manager, current_config, create_backup=True)
+ previous_main = api_v3.config_manager.load_config()
+ success, error_msg = _save_config_atomic(api_v3.config_manager, current_config, create_backup=True)
  if not success:
      return error_response(
          ErrorCode.CONFIG_SAVE_FAILED,
          f"Failed to reset configuration: {error_msg}",
          status_code=500
      )
- if default_secrets or not preserve_secrets:
-     api_v3.config_manager.save_raw_file_content('secrets', current_secrets)
+ try:
+     if default_secrets or not preserve_secrets:
+         api_v3.config_manager.save_raw_file_content('secrets', current_secrets)
+ except Exception:
+     _save_config_atomic(api_v3.config_manager, previous_main, create_backup=False)
+     raise

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/blueprints/api_v3.py` around lines 5041 - 5050, The reset flow
can leave config and secrets inconsistent because _save_config_atomic writes
current_config first and then
api_v3.config_manager.save_raw_file_content('secrets', current_secrets) may
fail; change to perform the secrets write before committing the main config (or
implement rollback): either write secrets first (respecting
default_secrets/preserve_secrets) and only call
_save_config_atomic(api_v3.config_manager, current_config, create_backup=True)
after secrets succeed, or if you must write config first, capture the backup
returned by _save_config_atomic and on secrets write failure call
_save_config_atomic(..., backup=that_backup) or use api_v3.config_manager to
restore the previous config and secrets to ensure atomicity; update logic around
default_secrets, preserve_secrets, _save_config_atomic, current_config and
current_secrets accordingly.
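The write-then-rollback ordering can be sketched against a hypothetical store object (FakeStore stands in for config_manager; all method names here are illustrative, not the real API):

```python
def reset_config(store, new_config, new_secrets):
    # Capture the previous main config, commit the new one, then write
    # secrets; if the secrets write fails, restore the previous main
    # config so the two files never end up inconsistent.
    previous = store.load_config()
    store.save_config(new_config)
    try:
        store.save_secrets(new_secrets)
    except Exception:
        store.save_config(previous)  # rollback for consistency
        raise

class FakeStore:
    # In-memory stand-in used to demonstrate the rollback behavior.
    def __init__(self):
        self.config = {'v': 1}
        self.secrets = {}
        self.fail_secrets = False
    def load_config(self):
        return dict(self.config)
    def save_config(self, cfg):
        self.config = dict(cfg)
    def save_secrets(self, s):
        if self.fail_secrets:
            raise OSError('disk full')
        self.secrets = dict(s)
```

Writing secrets first (and only then the main config) achieves the same invariant with one fewer write in the failure path.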
  try:
      result = subprocess.run(
-         ['python3', wrapper_path],
+         [sys.executable, wrapper_path],
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, let's check if the file exists and get its line count
wc -l web_interface/blueprints/api_v3.py

Repository: ChuckBuilds/LEDMatrix
Length of output: 104
🏁 Script executed:
#!/bin/bash
# Check the specific lines mentioned (5191, 5224, 5323, 5434) with context
echo "=== Around line 5191 ==="
sed -n '5180,5200p' web_interface/blueprints/api_v3.py
echo -e "\n=== Around line 5224 ==="
sed -n '5215,5235p' web_interface/blueprints/api_v3.py
echo -e "\n=== Around line 5323 ==="
sed -n '5315,5335p' web_interface/blueprints/api_v3.py
echo -e "\n=== Around line 5434 ==="
sed -n '5425,5445p' web_interface/blueprints/api_v3.py

Repository: ChuckBuilds/LEDMatrix
Length of output: 3503
🏁 Script executed:
#!/bin/bash
# Check if sys is imported at module level
head -50 web_interface/blueprints/api_v3.py | grep -n "import sys"

Repository: ChuckBuilds/LEDMatrix
Length of output: 77
🏁 Script executed:
#!/bin/bash
# Use Python to analyze the functions containing these lines for proper scope analysis
python3 - <<'PYEOF'
import ast
from pathlib import Path

path = Path("web_interface/blueprints/api_v3.py")
code = path.read_text(encoding="utf-8")
tree = ast.parse(code)

# Find functions that contain lines 5191, 5224, 5323, 5434
target_lines = {5191, 5224, 5323, 5434}

def get_func_containing_line(line_no, tree):
    """Find the function definition that contains a given line."""
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            if hasattr(node, 'lineno') and hasattr(node, 'end_lineno'):
                if node.lineno <= line_no <= node.end_lineno:
                    return node
    return None

def analyze_sys_in_function(fn):
    """Find all sys uses and imports in a function."""
    sys_imports = []
    sys_uses = []
    for node in ast.walk(fn):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name == "sys":
                    sys_imports.append((node.lineno, "import sys"))
        elif isinstance(node, ast.ImportFrom):
            if node.module == "sys":
                sys_imports.append((node.lineno, "from sys import ..."))
        # Look for sys.executable or sys.X attribute access
        if isinstance(node, ast.Attribute):
            if isinstance(node.value, ast.Name) and node.value.id == "sys":
                sys_uses.append((node.lineno, f"sys.{node.attr}"))
        # Look for bare sys references
        elif isinstance(node, ast.Name) and node.id == "sys" and isinstance(node.ctx, ast.Load):
            sys_uses.append((node.lineno, "sys reference"))
    return sorted(set(sys_imports)), sorted(set(sys_uses))

analyzed = {}
for line_no in target_lines:
    fn = get_func_containing_line(line_no, tree)
    if fn:
        imports, uses = analyze_sys_in_function(fn)
        analyzed[line_no] = {
            "function": fn.name,
            "func_start": fn.lineno,
            "func_end": fn.end_lineno,
            "imports": imports,
            "uses": uses
        }
        print(f"Line {line_no}: in function '{fn.name}' (lines {fn.lineno}-{fn.end_lineno})")
        print(f"  sys imports at: {imports}")
        print(f"  sys uses at: {uses}")
        if uses and imports:
            first_use = uses[0][0]
            first_import = imports[0][0]
            if first_use < first_import:
                print(f"  ⚠️ ISSUE: sys used at line {first_use} BEFORE imported at line {first_import}")
        print()
PYEOF

Repository: ChuckBuilds/LEDMatrix
Length of output: 1513
🏁 Script executed:

```bash
#!/bin/bash
# Check the local import sys statements that cause the issue
echo "=== Local import sys at line 5270 ==="
sed -n '5265,5275p' web_interface/blueprints/api_v3.py
echo -e "\n=== Local import sys at line 5461 ==="
sed -n '5455,5465p' web_interface/blueprints/api_v3.py
```

Repository: ChuckBuilds/LEDMatrix
Length of output: 1347
🏁 Script executed:

```bash
#!/bin/bash
# Get broader context to understand function structure
echo "=== execute_plugin_action function start ==="
sed -n '5077,5090p' web_interface/blueprints/api_v3.py
echo -e "\n=== authenticate_spotify function start ==="
sed -n '5374,5385p' web_interface/blueprints/api_v3.py
```

Repository: ChuckBuilds/LEDMatrix
Length of output: 1212
Remove function-local `import sys` statements that shadow the module-level import and cause `UnboundLocalError` at runtime.

Both `execute_plugin_action()` (line 5270) and `authenticate_spotify()` (line 5461) contain local `import sys` statements that make `sys` a local variable for the entire function scope. However, `sys.executable` is referenced much earlier in each function (at lines 5191, 5224, and 5323 in `execute_plugin_action()` and line 5434 in `authenticate_spotify()`), before the local import. This causes `UnboundLocalError` at runtime. Since `sys` is imported at the module level (line 5), the local imports are redundant and should be removed.
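The failure mode is easy to reproduce in isolation. This standalone snippet (not project code) shows why a late function-local import poisons earlier references in the same function:

```python
import sys  # module-level import, as at the top of api_v3.py


def broken():
    # Because `import sys` appears later in this function body, Python
    # compiles `sys` as a *local* name for the whole function, so this
    # earlier reference raises UnboundLocalError instead of resolving
    # to the module-level import.
    exe = sys.executable
    import sys  # noqa: F811 -- the shadowing local import
    return exe


try:
    broken()
except UnboundLocalError as e:
    print(f"UnboundLocalError: {e}")
```

Deleting the inner `import sys` makes the function fall back to the module-level binding, which is exactly the suggested fix.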
Fix locations
@@ execute_plugin_action() around line 5270

```diff
     if action_def.get('oauth_flow'):
         # Import script as module to get auth URL
-        import sys
         import importlib.util
```

@@ authenticate_spotify() around line 5461

```diff
     # Import the script's functions directly to get the auth URL
-    import sys
     import importlib.util
```

🧰 Tools
🪛 Ruff (0.15.2)
[error] 5191-5191: Local variable sys referenced before assignment
(F823)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/blueprints/api_v3.py` at line 5191, The functions
execute_plugin_action and authenticate_spotify contain function-local "import
sys" statements that shadow the module-level sys import and cause
UnboundLocalError because sys.executable is referenced earlier in those
functions; remove the local "import sys" lines inside execute_plugin_action and
authenticate_spotify so the functions use the module-level sys (already imported
at top of the file), leaving any other sys usages unchanged.
```python
def _find_secret_fields(properties, prefix=''):
    fields = set()
    for field_name, field_props in properties.items():
        full_path = f"{prefix}.{field_name}" if prefix else field_name
        if isinstance(field_props, dict) and field_props.get('x-secret', False):
            fields.add(full_path)
        if isinstance(field_props, dict) and field_props.get('type') == 'object' and 'properties' in field_props:
            fields.update(_find_secret_fields(field_props['properties'], full_path))
    return fields


def _mask_secrets(config_dict, secrets_set, prefix=''):
    masked = {}
    for key, value in config_dict.items():
        full_path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            masked[key] = _mask_secrets(value, secrets_set, full_path)
        elif full_path in secrets_set:
```
Secret masking skips `x-secret` fields inside array items.

The traversal/masking logic only handles nested objects. If a schema places secret fields under `type: "array"` → `items.properties`, those values are left unmasked and can still be rendered.
🔐 Suggested fix

```diff
 def _find_secret_fields(properties, prefix=''):
     fields = set()
     for field_name, field_props in properties.items():
         full_path = f"{prefix}.{field_name}" if prefix else field_name
         if isinstance(field_props, dict) and field_props.get('x-secret', False):
             fields.add(full_path)
         if isinstance(field_props, dict) and field_props.get('type') == 'object' and 'properties' in field_props:
             fields.update(_find_secret_fields(field_props['properties'], full_path))
+        if isinstance(field_props, dict) and field_props.get('type') == 'array':
+            items = field_props.get('items', {})
+            if isinstance(items, dict) and items.get('type') == 'object' and 'properties' in items:
+                fields.update(_find_secret_fields(items['properties'], f"{full_path}[]"))
     return fields

 def _mask_secrets(config_dict, secrets_set, prefix=''):
     masked = {}
     for key, value in config_dict.items():
         full_path = f"{prefix}.{key}" if prefix else key
         if isinstance(value, dict):
             masked[key] = _mask_secrets(value, secrets_set, full_path)
+        elif isinstance(value, list):
+            masked[key] = [
+                _mask_secrets(item, secrets_set, f"{full_path}[]") if isinstance(item, dict) else item
+                for item in value
+            ]
         elif full_path in secrets_set:
             masked[key] = ''
         else:
             masked[key] = value
     return masked
```

Also applies to: 424-427
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/blueprints/pages_v3.py` around lines 402 - 418,
_find_secret_fields only recurses into nested objects, so secret definitions
under arrays (type:"array" with items.properties) are missed; update
_find_secret_fields to detect when field_props.get('type') == 'array' and
field_props.get('items') and recurse into field_props['items']['properties']
using the same full_path (or full_path + '[]' consistently) so those secret
paths are included in secrets_set, and update _mask_secrets to handle list
values by iterating over list elements and applying the same masking logic to
any dict elements (call _mask_secrets on each dict item with the same full_path)
so secrets inside array items are masked as well.
```js
var container = document.getElementById('{{ field_id }}_custom_html');
if (!container) { setTimeout(loadWidget, 100); return; }
fetch('/api/v3/plugins/{{ plugin_id }}/static/{{ html_file }}')
```
Escape JS-context values before building the custom widget fetch URL.

`plugin_id` and `html_file` are interpolated directly into a JavaScript string. Special characters can break parsing or open an injection path. Serialize with `tojson` and URL-encode path segments.
🛡️ Suggested fix

```diff
-    fetch('/api/v3/plugins/{{ plugin_id }}/static/{{ html_file }}')
+    var pluginId = {{ plugin_id|tojson }};
+    var htmlFile = {{ html_file|tojson }};
+    fetch('/api/v3/plugins/' + encodeURIComponent(pluginId) + '/static/' + encodeURIComponent(htmlFile))
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```js
var container = document.getElementById('{{ field_id }}_custom_html');
if (!container) { setTimeout(loadWidget, 100); return; }
var pluginId = {{ plugin_id|tojson }};
var htmlFile = {{ html_file|tojson }};
fetch('/api/v3/plugins/' + encodeURIComponent(pluginId) + '/static/' + encodeURIComponent(htmlFile))
```
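Why the raw interpolation is fragile can be shown with a Python stand-in for the template engine. Here `json.dumps` plays roughly the role of Jinja's `tojson` filter (which additionally escapes HTML-significant characters), and `urllib.parse.quote` approximates JavaScript's `encodeURIComponent`:

```python
import json
import urllib.parse

plugin_id = "weather'; alert(1); //"  # hostile example value

# Naive interpolation: the embedded quote terminates the JS string literal,
# letting the rest of the value execute as code.
naive = f"fetch('/api/v3/plugins/{plugin_id}/static/w.html')"

# tojson-style serialization yields a self-delimiting JS string literal...
safe_literal = json.dumps(plugin_id)
# ...and percent-encoding keeps the value a single, inert path segment.
safe_segment = urllib.parse.quote(plugin_id, safe='')
```

The encoded segment contains no quotes or slashes, so it can neither break out of the string literal nor add extra path components.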
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/templates/v3/partials/plugin_config.html` around lines 35 - 37,
The template currently injects raw {{ plugin_id }} and {{ html_file }} into a JS
string in loadWidget (var container / fetch call), which risks breaking parsing
or injection; change the template to JSON-serialize those values (use Jinja2's
tojson) and then in the JS build the path with encodeURIComponent before calling
fetch (i.e., use the JSON-serialized plugin_id and html_file variables and pass
them through encodeURIComponent when composing the
'/api/v3/plugins/.../static/...' URL in the fetch call), leaving field_id usage
unchanged.
display_manager.py:
- Add 4x6 BDF font loading with set_pixel_sizes for compact body text
- Initialize 5x7 BDF with set_pixel_sizes(0, 7) for proper metrics

api_v3.py:
- Fix on-demand display loading wrong plugin after service restart

of-the-day manager.py:
- Switch body text from TTF to BDF 4x6 font for crisp LED rendering
- Replace logging.getLogger with get_logger per project standards

web-ui-info manager.py:
- Replace bare except with except Exception
- Replace logging.getLogger with get_logger per project standards

test infrastructure:
- Add test/__init__.py to enable plugin test collection (6 tests unblocked)
- Fix test_web_api mock: use get_raw_file_content (matches endpoint behavior)
- Fix test_display_controller: route schedule config through config_service
- Fix test_config_validation_edge_cases: use ConfigManager API correctly

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ecret filter

CLAUDE.md:
- Add comprehensive CodeRabbit review checklist covering security, error handling, code quality, config, API design, and thread safety
- Based on analysis of 3 CodeRabbit reviews (PR ChuckBuilds#2, ChuckBuilds#42, ChuckBuilds#269)

api_v3.py:
- Fix _drop_empty_secrets to filter None values (match _filter_empty_secrets)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (3)
web_interface/blueprints/api_v3.py (3)
5207-5207: ⚠️ Potential issue | 🔴 Critical

Remove function-local `import sys` to prevent `UnboundLocalError`.

`execute_plugin_action()` and `authenticate_spotify()` still have local `import sys` statements (Line 5286 and Line 5477). Since `sys.executable` is referenced earlier (e.g., Line 5207, Line 5339, Line 5450), Python treats `sys` as local and can raise `UnboundLocalError`.

🔧 Suggested fix

```diff
-    import sys
     import importlib.util
```

```diff
-    import sys
     import importlib.util
```

Verification script:

```bash
#!/bin/bash
python3 - <<'PY'
import ast
from pathlib import Path

path = Path("web_interface/blueprints/api_v3.py")
tree = ast.parse(path.read_text(encoding="utf-8"))
targets = {"execute_plugin_action", "authenticate_spotify"}
for node in tree.body:
    if isinstance(node, ast.FunctionDef) and node.name in targets:
        local_sys_imports = sorted(
            n.lineno for n in ast.walk(node)
            if isinstance(n, ast.Import) and any(a.name == "sys" for a in n.names)
        )
        sys_refs = sorted(
            n.lineno for n in ast.walk(node)
            if isinstance(n, ast.Attribute)
            and isinstance(n.value, ast.Name) and n.value.id == "sys"
        )
        print(f"{node.name}: local_imports={local_sys_imports}, sys_refs={sys_refs}")
        if local_sys_imports and sys_refs and min(sys_refs) < min(local_sys_imports):
            print(f"ISSUE: sys referenced before local import in {node.name}")
PY
```

Also applies to: 5339-5339, 5450-5450
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web_interface/blueprints/api_v3.py` at line 5207, The functions execute_plugin_action and authenticate_spotify contain local "import sys" statements which shadow the module-level sys reference (used earlier as sys.executable) and can cause UnboundLocalError; remove the function-local "import sys" lines and rely on a single module-level "import sys" (or add one at top of the file) so that references like sys.executable and any other sys.<attr> inside execute_plugin_action and authenticate_spotify resolve to the module-level sys rather than a local variable.
2627-2650: ⚠️ Potential issue | 🟠 Major

Fail closed if secret masking cannot be applied.

If `schema_mgr` is missing or schema loading fails, the current control flow can still return the merged `plugin_config` unmasked.

🔒 Suggested fix

```diff
-        if schema_mgr:
-            try:
+        if not schema_mgr:
+            logger.error("Schema manager unavailable for plugin config GET (plugin_id=%s)", plugin_id)
+            return error_response(
+                ErrorCode.SYSTEM_ERROR,
+                'Schema manager not initialized',
+                status_code=503
+            )
+        try:
             schema_for_mask = schema_mgr.load_schema(plugin_id, use_cache=True)
-            if schema_for_mask and 'properties' in schema_for_mask:
+            if not schema_for_mask or 'properties' not in schema_for_mask:
+                return error_response(
+                    ErrorCode.CONFIG_LOAD_FAILED,
+                    f'Unable to load schema for {plugin_id}',
+                    status_code=500
+                )
+            if schema_for_mask and 'properties' in schema_for_mask:
                 def _mask_secret_fields(cfg, props):
@@
-        except Exception as mask_err:
-            import logging as _logging
-            _logging.warning("Could not mask secret fields for %s: %s", plugin_id, mask_err)
+        except Exception as mask_err:
+            logger.error("Could not mask secret fields for %s: %s", plugin_id, mask_err, exc_info=True)
+            return error_response(
+                ErrorCode.CONFIG_LOAD_FAILED,
+                f'Failed to mask secrets for {plugin_id}',
+                status_code=500
+            )
```

As per coding guidelines: "Validate inputs and handle errors early (Fail Fast principle)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web_interface/blueprints/api_v3.py` around lines 2627 - 2650, Ensure the GET path fails closed when secret masking cannot be applied: around the plugin secret-masking block that uses schema_mgr.load_schema(plugin_id) and the helper _mask_secret_fields, if schema_mgr is falsy, schema_for_mask is None/missing 'properties', or an exception (mask_err) occurs, do not return the unmasked plugin_config — instead abort/raise an error (HTTP 400/500 or appropriate exception) so the request fails rather than leaking secrets; update the try/except and the precondition checks for schema_mgr and schema_for_mask to enforce this fail-closed behavior and surface a clear error referencing plugin_id.
5057-5066: ⚠️ Potential issue | 🟠 Major

Reset flow still risks partial updates across main/secrets files.
Line 5058 commits main config before Line 5066 writes secrets; if the second write fails, state becomes inconsistent.
As per coding guidelines: "Validate inputs and handle errors early (Fail Fast principle)".
🧹 Nitpick comments (8)
test/test_config_validation_edge_cases.py (1)
33-58: Consider parameterizing the three invalid-JSON tests to reduce duplication.

All three tests follow the same setup/assert flow and can be consolidated for readability and easier maintenance.
Refactor sketch
```diff
 class TestInvalidJson:
     """Test handling of invalid JSON in config files."""

-    def test_invalid_json_syntax(self, tmp_path):
-        """Config with invalid JSON syntax should raise ConfigError."""
-        config_file = tmp_path / "config.json"
-        config_file.write_text("{ invalid json }")
-
-        config_manager = ConfigManager(config_path=str(config_file))
-        with pytest.raises(ConfigError):
-            config_manager.load_config()
-
-    def test_truncated_json(self, tmp_path):
-        """Config with truncated JSON should raise ConfigError."""
-        config_file = tmp_path / "config.json"
-        config_file.write_text('{"plugin": {"enabled": true')  # Missing closing braces
-
-        config_manager = ConfigManager(config_path=str(config_file))
-        with pytest.raises(ConfigError):
-            config_manager.load_config()
-
-    def test_empty_config_file(self, tmp_path):
-        """Empty config file should raise ConfigError."""
+    @pytest.mark.parametrize(
+        "payload",
+        [
+            "{ invalid json }",
+            '{"plugin": {"enabled": true',
+            "",
+        ],
+    )
+    def test_invalid_json_inputs_raise_config_error(self, tmp_path, payload):
+        """Invalid JSON payloads should raise ConfigError."""
         config_file = tmp_path / "config.json"
-        config_file.write_text("")
+        config_file.write_text(payload)

         config_manager = ConfigManager(config_path=str(config_file))
         with pytest.raises(ConfigError):
             config_manager.load_config()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_config_validation_edge_cases.py` around lines 33 - 58, Combine the three duplicate tests into a single parametrized pytest function: create a new test like test_invalid_json_variants that is decorated with `@pytest.mark.parametrize` and supplies the three different file contents (invalid syntax "{ invalid json }", truncated '{"plugin": {"enabled": true', and empty ""). In the test body, write the param content to a temp config_file, instantiate ConfigManager(config_path=str(config_file)), and assert with pytest.raises(ConfigError) when calling ConfigManager.load_config(); reference the existing test names (test_invalid_json_syntax, test_truncated_json, test_empty_config_file) when replacing them so reviewers can find the consolidated test easily.

plugin-repos/web-ui-info/manager.py (1)
304-305: Avoid silently swallowing fallback cleanup errors.

Line 304 currently suppresses all cleanup failures, which makes remote troubleshooting much harder on device deployments. Log this path at least once.

As per coding guidelines: "Implement comprehensive logging for remote debugging on Raspberry Pi" and "Provide clear error messages for troubleshooting".

Proposed patch

```diff
-        except Exception:
-            pass
+        except Exception as e:
+            self.logger.warning(f"Fallback display cleanup failed: {e}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/web-ui-info/manager.py` around lines 304 - 305, Replace the silent "except Exception: pass" in plugin-repos/web-ui-info/manager.py with a logged error so cleanup failures are visible; locate the bare except block (the exact "except Exception: pass") and call the module/class logger (e.g. self.logger.exception(...) or logger.exception(...)/logging.getLogger(__name__).exception(...)) with a clear message like "Fallback cleanup failed in <method_name>" and include the exception details; ensure you do this once in that catch path (don't re-raise) so remote troubleshooting on devices can see the failure.

plugin-repos/of-the-day/manager.py (2)
482-493: Use the cached `_get_fonts()` instead of reloading fonts from disk.

`_display_title` loads fonts from disk on every call, while `_display_content` (line 608) uses the cached `_get_fonts()` method. This inconsistency causes unnecessary disk I/O on Raspberry Pi during the title rotation phase.

♻️ Suggested fix

```diff
-        # Load fonts - match old manager font usage
-        try:
-            title_font = ImageFont.truetype('assets/fonts/PressStart2P-Regular.ttf', 8)
-        except Exception as e:
-            self.logger.warning(f"Failed to load PressStart2P font: {e}, using fallback")
-            title_font = self.display_manager.small_font if hasattr(self.display_manager, 'small_font') else ImageFont.load_default()
-
-        body_font = getattr(self.display_manager, 'bdf_4x6_font', None)
-        if body_font is None:
-            body_font = getattr(self.display_manager, 'bdf_5x7_font', None)
-        if body_font is None:
-            body_font = getattr(self.display_manager, 'extra_small_font',
-                                ImageFont.load_default())
+        # Load fonts (cached after first call to avoid per-frame disk I/O)
+        title_font, body_font = self._get_fonts()
```

As per coding guidelines: "Optimize code for Raspberry Pi's limited RAM and CPU capabilities".
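A sketch of the cache-on-first-use pattern that `_get_fonts()` presumably follows (the loader names below are hypothetical stand-ins; the plugin's real loaders are not shown in this review):

```python
class FontCacheMixin:
    """Illustrative cache-on-first-use pattern; loader names are stand-ins."""

    def _load_title_font(self):  # would hit the filesystem in the real plugin
        raise NotImplementedError

    def _load_body_font(self):
        raise NotImplementedError

    def _get_fonts(self):
        # Load once, then reuse: repeated display calls hit the cache
        # instead of re-reading font files from the SD card each frame.
        if not hasattr(self, '_font_cache'):
            self._font_cache = (self._load_title_font(), self._load_body_font())
        return self._font_cache
```

The first call pays the disk cost; every subsequent call returns the cached tuple, which is why routing both `_display_title` and `_display_content` through the same helper matters.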
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/manager.py` around lines 482 - 493, _display_title is loading fonts directly from disk each call (using ImageFont.truetype and getattr on display_manager fonts) causing extra I/O on Raspberry Pi; change it to call the existing cached _get_fonts() helper (which _display_content already uses) to obtain title_font and body_font, replace the ImageFont.truetype and the getattr fallback logic in _display_title with the fonts returned by _get_fonts(), and keep the same fallback to display_manager.small_font or ImageFont.load_default() if the cached result is missing.
800-804: Consider calling `super().cleanup()` for consistency.

If `BasePlugin.cleanup()` has any base-class teardown logic, it won't be invoked.

🧹 Suggested fix

```diff
     def cleanup(self) -> None:
         """Cleanup resources."""
+        super().cleanup()
         self.current_items = {}
         self.data_files = {}
         self.logger.info("Of The Day plugin cleaned up")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/manager.py` around lines 800 - 804, The cleanup method in the class overrides BasePlugin.cleanup but never calls the base-class teardown; update the cleanup implementation in cleanup() to first (or last) invoke super().cleanup() so any BasePlugin.cleanup() logic runs, then clear self.current_items and self.data_files and log via self.logger.info as before; reference the cleanup() method and BasePlugin.cleanup() to locate where to add the super() call.

test/test_web_api.py (1)
146-153: Consider verifying the argument passed to `get_raw_file_content` for secrets.

For consistency with `test_get_main_config` (line 111), which verifies the argument is `'main'`, this test should verify the argument is `'secrets'`:

```diff
-        mock_config_manager.get_raw_file_content.assert_called_once()
+        mock_config_manager.get_raw_file_content.assert_called_once_with('secrets')
```

This ensures the secrets endpoint doesn't accidentally call `get_raw_file_content('main')`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_web_api.py` around lines 146 - 153, The test test_get_secrets_config currently only asserts that mock_config_manager.get_raw_file_content was called once but doesn't verify the argument; update test_get_secrets_config to assert the call used the 'secrets' key by replacing or supplementing the existing call assertion with mock_config_manager.get_raw_file_content.assert_called_once_with('secrets') so the endpoint is validated to request the 'secrets' file (referencing the test_get_secrets_config function and mock_config_manager.get_raw_file_content).

plugin-repos/of-the-day/web_ui/file_manager.html (2)
1001-1007: Minor edge case in `formatFileSize` for very large files.

The function handles zero correctly but has edge cases:

- Negative values would produce `NaN` from `Math.log()`
- Files larger than ~1GB would exceed the `sizes` array bounds

For JSON config files this is unlikely, but a small guard improves robustness:

🔧 Optional hardening

```diff
 function formatFileSize(bytes) {
-    if (bytes === 0) return '0 Bytes';
+    if (!bytes || bytes <= 0) return '0 Bytes';
     const k = 1024;
-    const sizes = ['Bytes', 'KB', 'MB'];
-    const i = Math.floor(Math.log(bytes) / Math.log(k));
+    const sizes = ['Bytes', 'KB', 'MB', 'GB'];
+    const i = Math.min(Math.floor(Math.log(bytes) / Math.log(k)), sizes.length - 1);
     return Math.round(bytes / Math.pow(k, i) * 100) / 100 + ' ' + sizes[i];
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/web_ui/file_manager.html` around lines 1001 - 1007, The formatFileSize function should guard against negative inputs and index overflow: in formatFileSize, check if bytes is null/undefined or bytes < 0 and return a safe value (e.g., '0 Bytes' or normalize to Math.abs(bytes)), extend the sizes array (e.g., add 'GB','TB', etc.) or clamp the computed index i to the last element to avoid accessing sizes[i] out of bounds, and ensure i is at least 0 before using it; update the logic around the i calculation and sizes lookup in formatFileSize to implement these guards.
1025-1031: The `escapeJsString` function addresses the XSS concern but could be more comprehensive.

The implementation handles the common escape sequences (backslash, single quote, newline, carriage return). However, for defense-in-depth, consider also escaping:

- Backticks (`` ` ``) - relevant if the escaped string could ever be used in template literals
- HTML-breaking sequences like `</script>` patterns

🛡️ Optional enhancement for more robust escaping

```diff
 function escapeJsString(text) {
     return String(text)
         .replace(/\\/g, '\\\\')
         .replace(/'/g, "\\'")
+        .replace(/`/g, '\\`')
         .replace(/\n/g, '\\n')
-        .replace(/\r/g, '\\r');
+        .replace(/\r/g, '\\r')
+        .replace(/<\//g, '<\\/');
 }
```

Since the current usage is within single-quoted onclick attributes, the existing implementation is sufficient for the immediate use case.
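A Python mirror of the hardened function makes the ordering dependency easy to check (the production code is JavaScript; this port exists only for illustration):

```python
def escape_js_string(text):
    # Backslashes must be escaped first; otherwise the backslashes
    # introduced by the later replacements would themselves be doubled.
    return (str(text)
            .replace('\\', '\\\\')
            .replace("'", "\\'")
            .replace('`', '\\`')
            .replace('\n', '\\n')
            .replace('\r', '\\r')
            .replace('</', '<\\/'))
```

Escaping `</` covers every case-insensitive spelling of `</script>`, which is why the suggested fix prefers it over matching the literal tag name.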
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugin-repos/of-the-day/web_ui/file_manager.html` around lines 1025 - 1031, The escapeJsString function currently replaces backslashes, single quotes, newlines and carriage returns but should also neutralize backticks and closing script tags for defense-in-depth; update escapeJsString to additionally replace the backtick character (`) with an escaped sequence and to safely escape any "</script>" (case-insensitive) by replacing "</" with "<\/" or specifically replacing "</script>" with "<\/script>" (and consider using a case-insensitive replacement) so injected template literals or script-terminating sequences cannot break out of the single-quoted onclick context.

web_interface/blueprints/api_v3.py (1)
4771-4788: Consolidate duplicated empty-secret filters.
`_drop_empty_secrets` and `_filter_empty_secrets` are functionally identical; keeping both increases maintenance risk.

♻️ Suggested simplification

```diff
-    # Filter out empty-string secret values so that the masked empty strings
-    # returned by the GET endpoint don't overwrite existing saved secrets.
-    def _filter_empty_secrets(d):
-        filtered = {}
-        for k, v in d.items():
-            if isinstance(v, dict):
-                nested = _filter_empty_secrets(v)
-                if nested:
-                    filtered[k] = nested
-            elif v is not None and not (isinstance(v, str) and v.strip() == ''):
-                filtered[k] = v
-        return filtered
-    secrets_config = _filter_empty_secrets(secrets_config)
+    # Reuse the same helper to keep behavior consistent and avoid duplication.
+    secrets_config = _drop_empty_secrets(secrets_config)
```

As per coding guidelines: "Prefer clear, readable code over clever optimizations (Simplicity First principle)".
Also applies to: 4799-4812
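The surviving helper's intended semantics, as a standalone sketch (the real function lives in api_v3.py; this copy is for illustration):

```python
def drop_empty_secrets(d):
    """Recursively drop None values and blank/whitespace-only strings.

    This keeps the masked '' placeholders returned by the GET endpoint
    from overwriting real saved secrets on the next save.
    """
    filtered = {}
    for key, value in d.items():
        if isinstance(value, dict):
            nested = drop_empty_secrets(value)
            if nested:  # drop branches that became empty after filtering
                filtered[key] = nested
        elif value is not None and not (isinstance(value, str) and value.strip() == ''):
            filtered[key] = value
    return filtered

drop_empty_secrets({"weather": {"api_key": "", "units": "metric"}, "x": None})
# → {'weather': {'units': 'metric'}}
```

Note that whole sub-dicts disappear when everything inside them is filtered out, so a plugin whose only secret is blank contributes nothing to the merged save.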
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web_interface/blueprints/api_v3.py` around lines 4771 - 4788, The two functions _drop_empty_secrets and _filter_empty_secrets are duplicates; remove one and consolidate callers to use the remaining function to avoid maintenance drift. Choose one canonical name (e.g., _filter_empty_secrets or _drop_empty_secrets), keep its implementation (the recursive removal of None/blank-string/whitespace entries), delete the other function, and update all references in this module (including the current assignment secrets_config = _drop_empty_secrets(secrets_config) and the other occurrence around lines 4799-4812) to call the chosen function name so behavior remains unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugin-repos/of-the-day/web_ui/file_manager.html`:
- Line 3: The HTML includes a CDN link to JSONEditor using the floating tag
"@latest" which risks breaking changes; update the <link> tag in
file_manager.html (the JSONEditor stylesheet import) to reference a specific,
tested version (e.g., replace jsoneditor@latest with jsoneditor@X.Y.Z) and, if
possible, add Subresource Integrity (integrity="sha384-...") and
crossorigin="anonymous" attributes to the tag to improve security and
reproducible builds.
In `@src/display_manager.py`:
- Around line 418-420: The except block catching font load failures (the except
Exception as font_err: handling for loading the 4x6 BDF font and assignment to
self.bdf_4x6_font) currently logs only a formatted message and loses the
traceback; replace the logger.error call with a call that preserves the
traceback (e.g., logger.exception(...) or logger.error(..., exc_info=True)) so
the full stack trace is included, and leave the fallback assignment to
self.bdf_5x7_font/self.small_font unchanged.
In `@web_interface/blueprints/api_v3.py`:
- Around line 1650-1659: The code currently calls int(duration) and uses it to
compute config_ttl which will raise on non-numeric inputs or allow non-positive
TTLs; update the logic around config_ttl/config_payload/cache.set to first
validate that duration is a numeric, positive value (e.g., parse safely with a
try/except or str.isdigit check), clamp it into the allowed range (minimum 1,
maximum 3600), and fall back to the default 3600 when validation fails; ensure
the computed config_ttl is always an int >0 before using it in the expires_at
calculation and passing to cache.set ('display_on_demand_config') so no 500s
occur.
---
Duplicate comments:
In `@web_interface/blueprints/api_v3.py`:
- Line 5207: The functions execute_plugin_action and authenticate_spotify
contain local "import sys" statements which shadow the module-level sys
reference (used earlier as sys.executable) and can cause UnboundLocalError;
remove the function-local "import sys" lines and rely on a single module-level
"import sys" (or add one at top of the file) so that references like
sys.executable and any other sys.<attr> inside execute_plugin_action and
authenticate_spotify resolve to the module-level sys rather than a local
variable.
- Around line 2627-2650: Ensure the GET path fails closed when secret masking
cannot be applied: around the plugin secret-masking block that uses
schema_mgr.load_schema(plugin_id) and the helper _mask_secret_fields, if
schema_mgr is falsy, schema_for_mask is None/missing 'properties', or an
exception (mask_err) occurs, do not return the unmasked plugin_config — instead
abort/raise an error (HTTP 400/500 or appropriate exception) so the request
fails rather than leaking secrets; update the try/except and the precondition
checks for schema_mgr and schema_for_mask to enforce this fail-closed behavior
and surface a clear error referencing plugin_id.
---
Nitpick comments:
In `@plugin-repos/of-the-day/manager.py`:
- Around line 482-493: _display_title is loading fonts directly from disk each
call (using ImageFont.truetype and getattr on display_manager fonts) causing
extra I/O on Raspberry Pi; change it to call the existing cached _get_fonts()
helper (which _display_content already uses) to obtain title_font and body_font,
replace the ImageFont.truetype and the getattr fallback logic in _display_title
with the fonts returned by _get_fonts(), and keep the same fallback to
display_manager.small_font or ImageFont.load_default() if the cached result is
missing.
- Around line 800-804: The cleanup method in the class overrides
BasePlugin.cleanup but never calls the base-class teardown; update the cleanup
implementation in cleanup() to first (or last) invoke super().cleanup() so any
BasePlugin.cleanup() logic runs, then clear self.current_items and
self.data_files and log via self.logger.info as before; reference the cleanup()
method and BasePlugin.cleanup() to locate where to add the super() call.
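The shape of the fix, with a stand-in `BasePlugin` so the example is self-contained (the real base class lives in the project):

```python
class BasePlugin:
    # Stand-in: only cleanup() matters for this example.
    def __init__(self):
        self.base_cleaned = False

    def cleanup(self):
        self.base_cleaned = True


class OfTheDayPlugin(BasePlugin):
    def __init__(self):
        super().__init__()
        self.current_items = ['item']
        self.data_files = {'quotes': 'quotes.json'}

    def cleanup(self):
        super().cleanup()  # run base-class teardown first
        self.current_items = []
        self.data_files = {}
```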
In `@plugin-repos/of-the-day/web_ui/file_manager.html`:
- Around line 1001-1007: The formatFileSize function should guard against
negative inputs and index overflow: in formatFileSize, check if bytes is
null/undefined or bytes < 0 and return a safe value (e.g., '0 Bytes' or
normalize to Math.abs(bytes)), extend the sizes array (e.g., add 'GB','TB',
etc.) or clamp the computed index i to the last element to avoid accessing
sizes[i] out of bounds, and ensure i is at least 0 before using it; update the
logic around the i calculation and sizes lookup in formatFileSize to implement
these guards.
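The guard logic, sketched in Python for brevity (the real helper is JavaScript in `file_manager.html`; names and the exact output format are illustrative):

```python
import math


def format_file_size(num_bytes):
    # Guard against missing or negative input, and clamp the unit index
    # so huge values never index past the end of the sizes list.
    if num_bytes is None or num_bytes <= 0:
        return '0 Bytes'
    sizes = ['Bytes', 'KB', 'MB', 'GB', 'TB']
    i = min(int(math.log(num_bytes, 1024)), len(sizes) - 1)
    i = max(i, 0)
    return f'{num_bytes / 1024 ** i:.2f} {sizes[i]}'
```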
- Around line 1025-1031: The escapeJsString function currently replaces
backslashes, single quotes, newlines and carriage returns but should also
neutralize backticks and closing script tags for defense-in-depth; update
escapeJsString to additionally replace the backtick character (`) with an
escaped sequence and to safely escape any "</script>" (case-insensitive) by
replacing "</" with "<\/" or specifically replacing "</script>" with
"<\/script>" (and consider using a case-insensitive replacement) so injected
template literals or script-terminating sequences cannot break out of the
single-quoted onclick context.
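The replacement chain, again sketched in Python (the real `escapeJsString` is JavaScript; the ordering matters — backslashes must be escaped first so later escapes survive):

```python
import re


def escape_js_string(value):
    value = value.replace('\\', '\\\\')   # backslashes first
    value = value.replace("'", "\\'")
    value = value.replace('\n', '\\n').replace('\r', '\\r')
    # Defense-in-depth: neutralize backticks and script-terminating
    # sequences so injected text cannot break out of the JS context.
    value = value.replace('`', '\\`')
    value = re.sub(r'</script', r'<\/script', value, flags=re.IGNORECASE)
    return value
```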
In `@plugin-repos/web-ui-info/manager.py`:
- Around line 304-305: Replace the silent "except Exception: pass" in
plugin-repos/web-ui-info/manager.py with a logged error so cleanup failures are
visible; locate the bare except block (the exact "except Exception: pass") and
call the module/class logger (e.g. self.logger.exception(...) or
logger.exception(...)/logging.getLogger(__name__).exception(...)) with a clear
message like "Fallback cleanup failed in <method_name>" and include the
exception details; ensure you do this once in that catch path (don't re-raise)
so remote troubleshooting on devices can see the failure.
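The suggested pattern — visible but non-fatal — looks like this (`fallback_cleanup` is an illustrative name for the method containing the bare except):

```python
import logging

logger = logging.getLogger(__name__)


def fallback_cleanup(resource):
    try:
        resource.close()
    except Exception:
        # Log the full traceback instead of swallowing silently,
        # but do not re-raise: cleanup failures should not be fatal.
        logger.exception("Fallback cleanup failed in fallback_cleanup")
```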
In `@test/test_config_validation_edge_cases.py`:
- Around line 33-58: Combine the three duplicate tests into a single
parametrized pytest function: create a new test like test_invalid_json_variants
that is decorated with `@pytest.mark.parametrize` and supplies the three different
file contents (invalid syntax "{ invalid json }", truncated '{"plugin":
{"enabled": true', and empty ""). In the test body, write the param content to a
temp config_file, instantiate ConfigManager(config_path=str(config_file)), and
assert with pytest.raises(ConfigError) when calling ConfigManager.load_config();
reference the existing test names (test_invalid_json_syntax,
test_truncated_json, test_empty_config_file) when replacing them so reviewers
can find the consolidated test easily.
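The consolidated test could look like the sketch below. `ConfigManager` and `ConfigError` are minimal stand-ins so the example runs standalone; the real classes live in the project:

```python
import json

import pytest


class ConfigError(Exception):
    """Stand-in for the project's ConfigError."""


class ConfigManager:
    # Minimal stand-in: the real implementation lives in src/.
    def __init__(self, config_path):
        self.config_path = config_path

    def load_config(self):
        try:
            with open(self.config_path) as fh:
                return json.load(fh)
        except ValueError as exc:  # json.JSONDecodeError is a ValueError
            raise ConfigError(str(exc)) from exc


@pytest.mark.parametrize("content", [
    "{ invalid json }",             # invalid syntax
    '{"plugin": {"enabled": true',  # truncated
    "",                             # empty file
])
def test_invalid_json_variants(tmp_path, content):
    config_file = tmp_path / "config.json"
    config_file.write_text(content)
    with pytest.raises(ConfigError):
        ConfigManager(config_path=str(config_file)).load_config()
```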
In `@test/test_web_api.py`:
- Around line 146-153: The test test_get_secrets_config currently only asserts
that mock_config_manager.get_raw_file_content was called once but doesn't verify
the argument; update test_get_secrets_config to assert the call used the
'secrets' key by replacing or supplementing the existing call assertion with
mock_config_manager.get_raw_file_content.assert_called_once_with('secrets') so
the endpoint is validated to request the 'secrets' file (referencing the
test_get_secrets_config function and mock_config_manager.get_raw_file_content).
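The difference between the two assertions, in miniature (the mock setup is illustrative, not the test fixture itself):

```python
from unittest.mock import MagicMock

mock_config_manager = MagicMock()
mock_config_manager.get_raw_file_content.return_value = "{}"

# Simulated endpoint behaviour under test:
mock_config_manager.get_raw_file_content('secrets')

# Verifies the argument, not just the call count -- this would fail if
# the endpoint requested e.g. 'config' instead of 'secrets'.
mock_config_manager.get_raw_file_content.assert_called_once_with('secrets')
```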
In `@web_interface/blueprints/api_v3.py`:
- Around line 4771-4788: The two functions _drop_empty_secrets and
_filter_empty_secrets are duplicates; remove one and consolidate callers to use
the remaining function to avoid maintenance drift. Choose one canonical name
(e.g., _filter_empty_secrets or _drop_empty_secrets), keep its implementation
(the recursive removal of None/blank-string/whitespace entries), delete the
other function, and update all references in this module (including the current
assignment secrets_config = _drop_empty_secrets(secrets_config) and the other
occurrence around lines 4799-4812) to call the chosen function name so behavior
remains unchanged.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- CLAUDE.md
- plugin-repos/of-the-day/manager.py
- plugin-repos/of-the-day/web_ui/file_manager.html
- plugin-repos/web-ui-info/manager.py
- src/display_manager.py
- test/__init__.py
- test/test_config_validation_edge_cases.py
- test/test_display_controller.py
- test/test_web_api.py
- web_interface/blueprints/api_v3.py
✅ Files skipped from review due to trivial changes (1)
- test/__init__.py
@@ -0,0 +1,1049 @@
<div class="file-manager-container" onclick="event.stopPropagation();">
<!-- JSONEditor CSS -->
<link href="https://cdn.jsdelivr.net/npm/jsoneditor@latest/dist/jsoneditor.min.css" rel="stylesheet" type="text/css">
🧩 Analysis chain
🏁 Script executed:
# Find the file_manager.html file and check its CDN references
find . -name "file_manager.html" -type f

Repository: ChuckBuilds/LEDMatrix
Length of output: 115
🌐 Web query:
JSONEditor npm package latest version and releases
💡 Result:
jsoneditor (npm package) — latest version + recent releases
- Latest published version (tag latest): 10.4.2 (jsdelivr.com)
- Most recent release date: Oct 15, 2025 (v10.4.2) (releasealert.dev)
Recent release history (most recent first)
- v10.4.2 — Oct 15, 2025 (releasealert.dev)
- v10.4.1 — Aug 28, 2025 (releasealert.dev)
- v10.4.0 — Aug 22, 2025 (releasealert.dev)
- v10.3.0 — Aug 1, 2025 (releasealert.dev)
- v10.2.0 — Mar 28, 2025 (releasealert.dev)
Install
npm i jsoneditor@10.4.2

Citations:
- 1: https://www.jsdelivr.com/package/npm/jsoneditor?utm_source=openai
- 2: https://releasealert.dev/github/josdejong/jsoneditor?utm_source=openai
- 3: https://releasealert.dev/github/josdejong/jsoneditor?utm_source=openai
- 4: https://releasealert.dev/github/josdejong/jsoneditor?utm_source=openai
- 5: https://releasealert.dev/github/josdejong/jsoneditor?utm_source=openai
- 6: https://releasealert.dev/github/josdejong/jsoneditor?utm_source=openai
- 7: https://releasealert.dev/github/josdejong/jsoneditor?utm_source=openai
🏁 Script executed:
# Read the file_manager.html to see current CDN references
cat -n ./plugin-repos/of-the-day/web_ui/file_manager.html | head -430 | tail -430

Repository: ChuckBuilds/LEDMatrix
Length of output: 16166
Pin the JSONEditor version instead of using @latest.
Using @latest can cause unexpected breaking changes when the library releases new versions. Pin to a specific stable version for reproducible builds:
- <link href="https://cdn.jsdelivr.net/npm/jsoneditor@latest/dist/jsoneditor.min.css" rel="stylesheet" type="text/css">
+ <link href="https://cdn.jsdelivr.net/npm/jsoneditor@10.4.2/dist/jsoneditor.min.css" rel="stylesheet" type="text/css">
- <script src="https://cdn.jsdelivr.net/npm/jsoneditor@latest/dist/jsoneditor.min.js"></script>
+ <script src="https://cdn.jsdelivr.net/npm/jsoneditor@10.4.2/dist/jsoneditor.min.js"></script>

For enhanced security, consider adding Subresource Integrity (SRI) hashes, though this is less critical for a local-only web interface.
Also applies to: 426-426
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugin-repos/of-the-day/web_ui/file_manager.html` at line 3, The HTML
includes a CDN link to JSONEditor using the floating tag "@latest" which risks
breaking changes; update the <link> tag in file_manager.html (the JSONEditor
stylesheet import) to reference a specific, tested version (e.g., replace
jsoneditor@latest with jsoneditor@X.Y.Z) and, if possible, add Subresource
Integrity (integrity="sha384-...") and crossorigin="anonymous" attributes to the
tag to improve security and reproducible builds.
except Exception as font_err:
    logger.error(f"Failed to load 4x6 BDF font: {font_err}")
    self.bdf_4x6_font = getattr(self, 'bdf_5x7_font', self.small_font)
Preserve traceback on 4x6 font load failures.
At Line 419, logger.error(...) drops stack context. For Pi-side debugging, this makes root-cause analysis harder.
🔧 Suggested fix
- except Exception as font_err:
- logger.error(f"Failed to load 4x6 BDF font: {font_err}")
+ except Exception as font_err:
+ logger.exception(f"Failed to load 4x6 BDF font: {font_err}")
          self.bdf_4x6_font = getattr(self, 'bdf_5x7_font', self.small_font)

As per coding guidelines, "Include stack traces in error logs to provide complete error context for debugging."
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 418-418: Do not catch blind exception: Exception
(BLE001)
[warning] 419-419: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/display_manager.py` around lines 418 - 420, The except block catching
font load failures (the except Exception as font_err: handling for loading the
4x6 BDF font and assignment to self.bdf_4x6_font) currently logs only a
formatted message and loses the traceback; replace the logger.error call with a
call that preserves the traceback (e.g., logger.exception(...) or
logger.error(..., exc_info=True)) so the full stack trace is included, and leave
the fallback assignment to self.bdf_5x7_font/self.small_font unchanged.
config_ttl = min(3600, int(duration)) if duration else 3600
config_payload = {
    'plugin_id': resolved_plugin,
    'mode': resolved_mode,
    'duration': duration,
    'pinned': pinned,
    'requested_at': now,
    'expires_at': now + config_ttl,
}
cache.set('display_on_demand_config', config_payload, ttl=config_ttl)
Validate duration before int() conversion to avoid 500s.
Line 1650 can throw on non-numeric input and can accept non-positive TTL values.
🔧 Suggested fix
- config_ttl = min(3600, int(duration)) if duration else 3600
+ config_ttl = 3600
+ if duration is not None:
+ try:
+ duration_seconds = int(duration)
+ except (TypeError, ValueError):
+ return jsonify({'status': 'error', 'message': 'duration must be a positive integer'}), 400
+ if duration_seconds <= 0:
+ return jsonify({'status': 'error', 'message': 'duration must be greater than 0'}), 400
+        config_ttl = min(3600, duration_seconds)

As per coding guidelines: "Validate inputs and handle errors early (Fail Fast principle)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/blueprints/api_v3.py` around lines 1650 - 1659, The code
currently calls int(duration) and uses it to compute config_ttl which will raise
on non-numeric inputs or allow non-positive TTLs; update the logic around
config_ttl/config_payload/cache.set to first validate that duration is a
numeric, positive value (e.g., parse safely with a try/except or str.isdigit
check), clamp it into the allowed range (minimum 1, maximum 3600), and fall back
to the default 3600 when validation fails; ensure the computed config_ttl is
always an int >0 before using it in the expires_at calculation and passing to
cache.set ('display_on_demand_config') so no 500s occur.
Summary
A collection of bug fixes across the web interface, plugin action API, and the of-the-day plugin, plus expanded developer documentation.
fix(web): mask x-secret fields in GET, skip empty secrets on POST
- `GET /api/v3/plugins/<id>/config` now recursively blanks any field marked `"x-secret": true` in the plugin's schema before returning, so live API keys are never sent to the browser in plaintext.
- `POST /api/v3/plugins/<id>/config` now skips saving empty-string values for `x-secret` fields. Previously, re-submitting the config form (which received blanked values from GET) would silently overwrite the stored secret with `""`.
- Added support for `x-widget: "custom-html"` in the plugin config template so plugins like of-the-day can embed custom file-manager UIs inside the standard config panel.
- `web_ui_actions` without a `title` are now hidden from the Plugin Actions section — they are API-only endpoints used by custom widgets.
- Updated `config_secrets.template.json` to use the plugin_id key (`"ledmatrix-weather"`) instead of the short alias (`"weather"`); the config_manager deep-merges by plugin_id so the short alias never reached the plugin.

fix(api): replace fragile wrapper script with direct stdin for plugin actions
The plugin action endpoint previously generated a temporary Python wrapper script that embedded JSON params as a Python literal in source. This broke on large payloads or strings containing quotes/backslashes.
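The stdin-based approach the fix adopts can be sketched as follows (`run_plugin_action` and its signature are illustrative, not the endpoint's exact code):

```python
import json
import subprocess
import sys


def run_plugin_action(script_path, params, timeout=30):
    # Pipe params as JSON over stdin instead of embedding them in a
    # generated wrapper script; avoids quoting/escaping pitfalls with
    # large payloads or strings containing quotes and backslashes.
    proc = subprocess.Popen(
        [sys.executable, script_path],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    out, err = proc.communicate(input=json.dumps(params), timeout=timeout)
    return proc.returncode, out, err
```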
Fix: pipe params directly to the plugin script via stdin (`subprocess.Popen` + `communicate(input=json.dumps(params))`), eliminating the wrapper file entirely.

Also fixed: a shadowed `logger` variable that caused `UnboundLocalError` on action failures, and a leftover `os.unlink(wrapper_path)` call in the timeout handler that referenced a variable that no longer existed (would raise `NameError`).

fix(of-the-day): auto-register category in config on first toggle
`toggle_category.py` returned HTTP 400 "Category not found in config" for any JSON file present on disk but absent from `config.json` under `of-the-day.categories` (e.g. manually-placed files or files added before the upload workflow ran `add_category_to_config`). `list_files.py` already displayed such files with `enabled=True` as a default, so the toggle switch was visible but non-functional.
data_fileanddisplay_namefrom the filename,enabled=True) and append it tocategory_order— mirroring whatupload_file.pydoes — then apply the requested enabled state.fix(of-the-day): fix file upload button and edit modal closing immediately
Two separate UI issues:
- JSONEditor injects its toolbar buttons without `type="button"` into the DOM. When the modal lived inside the HTMX plugin config `<form>`, clicking any JSONEditor toolbar button accidentally submitted the form and dismissed the modal. Fix: move `#edit-modal` and `#delete-modal` to `document.body` on widget init so they are outside the form entirely. An `initModals()` cleanup pass removes stale modals on re-injection.
- Added `event.stopPropagation()` on the file-manager container to prevent clicks from bubbling up to the HTMX form.

fix(of-the-day): load data files for categories missing from categories config
`_load_data_files()` only iterated `self.categories.items()`, so any category listed in `category_order` but absent from the `categories` dict (e.g. `word_of_the_day`, `bible_verse_of_the_day`) was silently skipped, leaving `current_items` empty and causing the plugin to show "No Data".
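The fix amounts to merging the two sources of category names before loading; a sketch (`build_load_list` is an illustrative name for logic the plugin would run internally):

```python
def build_load_list(categories, category_order):
    # Merge explicit config entries with entries that appear only in
    # category_order; explicit entries always take priority, while
    # auto-discovered ones get a derived data_file and enabled=True.
    load_list = dict(categories)
    for name in category_order:
        if name not in load_list:
            load_list[name] = {
                'data_file': f'of_the_day/{name}.json',
                'enabled': True,
            }
    return load_list
```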
self.categorieswith any entries inself.category_orderthat lack a config entry, auto-deriving thedata_filepath asof_the_day/<category_name>.json. Explicit config entries always take priority; auto-discovered ones default toenabled=True.docs: expand CLAUDE.md with comprehensive codebase reference
Added a detailed AI-assistant reference covering directory structure, running modes, config layout, plugin lifecycle/contract, Vegas mode integration, web interface patterns, logging conventions, testing patterns, coding standards, and common pitfalls.