feat(system): scrape playbooks, people, teams and playbook roles #1900
Conversation
Force-pushed from d2cca4f to 9e635bc
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `scrapers/system/system.go`:
- Around lines 149-169: `scrapeAccessEntities` calls `scrapePeople` and `scrapeTeams` and invokes `SetError` twice, so the second call overwrites the first error. Capture each call's error separately (e.g. `errPeople`, `errTeams`), assign `ExternalUsers`/`ExternalGroups` only when the corresponding call succeeded, and set `result.Error` once: aggregate the non-nil errors into a single error (`errors.Join`, or `fmt.Errorf` with `%w`) and pass it to `SetError`, or fail fast by returning immediately on the first error if that behavior is preferred.
- Around lines 31-32: guard against a nil persisted ID before converting with `lo.FromPtr` and calling `scrapeAccessEntities`. Fetch `persistedID := ctx.ScrapeConfig().GetPersistedID()`, check `persistedID != nil`, and only then compute `scraperID := lo.FromPtr(persistedID)` and call `results = append(results, scrapeAccessEntities(ctx, scraperID))`. Calling `lo.FromPtr` on a nil pointer yields the zero value, so the scraper would otherwise run with `uuid.Nil`.
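The error-aggregation suggestion from the first comment can be sketched as follows. The types and scraper stubs here are stand-ins (the real `ScrapeResult`, `scrapePeople`, and `scrapeTeams` live in the mission-control codebase): each error is captured separately, partial results are kept, and `errors.Join` collapses the non-nil errors into a single `SetError` call.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-in for the real ScrapeResult type.
type ScrapeResult struct {
	ExternalUsers  []string
	ExternalGroups []string
	Error          error
}

func (r *ScrapeResult) SetError(err error) { r.Error = err }

// Hypothetical per-entity scrapers; here people fails and teams succeeds.
func scrapePeople() ([]string, error) { return nil, errors.New("people: query failed") }
func scrapeTeams() ([]string, error)  { return []string{"sre"}, nil }

func scrapeAccessEntities() ScrapeResult {
	var result ScrapeResult

	// Capture each error separately instead of calling SetError twice.
	people, errPeople := scrapePeople()
	if errPeople == nil {
		result.ExternalUsers = people
	}

	teams, errTeams := scrapeTeams()
	if errTeams == nil {
		result.ExternalGroups = teams
	}

	// errors.Join ignores nil arguments, so one call covers every combination.
	if err := errors.Join(errPeople, errTeams); err != nil {
		result.SetError(err)
	}
	return result
}

func main() {
	r := scrapeAccessEntities()
	fmt.Printf("groups=%d err=%v\n", len(r.ExternalGroups), r.Error)
}
```

With this shape the teams result survives a people failure, and the aggregated error still surfaces to the caller.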
🧹 Nitpick comments (1)
scrapers/system/system.go (1)
37-64: nil `LastSeen` always marks the agent offline; verify this is intentional. When `agent.LastSeen` is nil, `lo.FromPtr` returns the zero `time.Time`, making `time.Since(...)` extremely large and the agent "unhealthy/offline". If newly registered agents legitimately have a nil `LastSeen`, they would appear unhealthy immediately. If that is the desired behavior, a brief comment would help future readers.
Force-pushed from 952f817 to d6da4ed
Extend the system scraper to populate access-related tables:
- Scrape non-CRD playbooks as MissionControl::Playbook config items
- Map people to external_users
- Map teams to external_groups
- Create playbook:run and playbook:approve external roles

Refactor Scrape() into smaller focused functions.
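As an illustration only, assuming a hypothetical `ExternalRole` shape (the real models live in mission-control), the per-playbook role step described above could look like:

```go
package main

import "fmt"

// ExternalRole is a hypothetical stand-in for the scraped role row.
type ExternalRole struct {
	ConfigID string // the MissionControl::Playbook config item this role belongs to
	Action   string // "run" or "approve"
	Name     string
}

// rolesForPlaybook derives the two per-playbook roles the scraper creates.
func rolesForPlaybook(configID, playbookName string) []ExternalRole {
	return []ExternalRole{
		{ConfigID: configID, Action: "run", Name: "playbook:run:" + playbookName},
		{ConfigID: configID, Action: "approve", Name: "playbook:approve:" + playbookName},
	}
}

func main() {
	for _, r := range rolesForPlaybook("cfg-123", "restart-pod") {
		fmt.Println(r.Name)
	}
}
```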
Force-pushed from d6da4ed to f094d08
related: flanksource/mission-control-chart#292