Conversation
Love the approach and structure of this PR. I'll bring some suggestions for those lovely tables.
@hanna-paasivirta can you raise a small PR against lightning with your suggestion? |
| # Shared status message pools for user-facing progress indicators.
| # Services compose from these to build context-specific pools.
| STATUS_REVIEWING_WORKFLOW = [
Lovely approach, well done
|     status = random.choice(STATUS_REVIEWING_WORKFLOW + STATUS_PLANNING)
| else:
|     status = random.choice(STATUS_NEW_WORKFLOW + STATUS_PLANNING)
| stream_manager.send_thinking(status)
One possible optimisation here:
rather than picking a choice and then calling send_thinking, what if you passed send_thinking a list and it picked at random?
It would just make the streaming code slightly easier to write (which in turn will hopefully encourage more and better streaming updates).
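That suggestion might look something like this (a sketch only; `send_thinking` and the pool names come from the diff, while the list-accepting behaviour and the `_emit` helper are hypothetical):

```python
import random

class StreamManager:
    def send_thinking(self, status_or_pool):
        # Accept either a single status string or a pool of candidates;
        # pick one at random when given a list, so call sites can just
        # pass their pool without calling random.choice themselves.
        if isinstance(status_or_pool, list):
            status_or_pool = random.choice(status_or_pool)
        self._emit("thinking", status_or_pool)

    def _emit(self, kind, message):
        # Placeholder transport; the real manager streams to the client.
        print(f"[{kind}] {message}")
```

The call site then collapses to `stream_manager.send_thinking(STATUS_REVIEWING_WORKFLOW + STATUS_PLANNING)`, with no branch-local `random.choice`.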
| names = [n for n in names if n]
| if len(names) == 1:
|     status = f"Writing code for \"{names[0]}\" step..."
| elif len(names) == 2:
It's a bit difficult to do this when generating many steps in parallel.
I think I'd prefer to just say "generating for step X" or just a flat "generating job code". The "and" is a bit cumbersome. And what would this look like if it's generating 4 steps at once? With long step names? I can't see it scaling.
I tried showing just the first step, and it's confusing when the other steps appear in the workflow apparently without having been worked on. I also think it's quite satisfying seeing the different steps being named and worked on at once. I've replaced it with just commas now. What do you think?
Commas will scale better. Let's see how it goes.
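The comma-joined version could be sketched like this (the empty-name filtering comes from the diff; the function name and the zero-names fallback are illustrative):

```python
def writing_status(names: list[str]) -> str:
    # Drop empty names, then comma-join the rest so the message
    # scales to any number of steps generated in parallel.
    names = [n for n in names if n]
    if not names:
        return "Writing job code..."
    quoted = ", ".join(f'"{n}"' for n in names)
    return f"Writing code for {quoted}..."
```

With four parallel steps this yields one line of comma-separated names rather than an "A and B and C and D" chain.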
| job_key = inputs.get("job_key")
| display_name = self._display_name_for_job(job_key)
| if display_name:
|     return f"Writing code for \"{display_name}\" step..."
Is the word "step" here a bit weird and confusing?
I think Writing code for "Fetch Patients" is pretty clear. If we really need the `step` keyword in there, I'd put it before (not after) the step name.
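The display-name resolution behind `_display_name_for_job` is described in the PR description (YAML `name` field first, then title-casing the key). A standalone sketch of that fallback, with an assumed function name and a plain-dict stand-in for the parsed YAML:

```python
def display_name_for_job(job_key: str, yaml_jobs: dict) -> str:
    """Prefer the YAML `name` field; otherwise title-case the key."""
    job = yaml_jobs.get(job_key, {})
    name = job.get("name")
    if name:
        return name
    # Fallback: fetch-patients -> "Fetch Patients"
    return job_key.replace("-", " ").replace("_", " ").title()
```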
|     status = random.choice(STATUS_REVIEWING_CODE)
| else:
|     status = random.choice(STATUS_NEW_CODE)
| stream_manager.send_thinking(status)
Can we not add any more context to the job chat? Like "loading adaptor docs"? I guess it's mostly all done in one LLM call.
What about process_stream_event? Can we do more there? Or at least cycle different synonyms for "generating code"?
I don't want to spend ages on this, just probing it a bit to see if we can add a bit more texture and movement to the updates
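The "cycle different synonyms" idea could be sketched as below (hypothetical names and phrasings; the PR ultimately went with expanded random pools instead):

```python
import itertools

# Hypothetical rotation of phrasings for repeated "generating code" updates,
# so successive stream events don't all read identically.
GENERATING_SYNONYMS = itertools.cycle([
    "Generating code...",
    "Writing the job...",
    "Drafting the logic...",
])

def next_generating_status() -> str:
    # Each streamed event advances to the next phrasing in the cycle.
    return next(GENERATING_SYNONYMS)
```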
I removed the adaptor docs status because it's quite annoying to see it in a turn where you ask an unrelated question. I've added one more status pool, STATUS_WORKING, that covers the prompt-building steps, and expanded the STATUS_REVIEWING_CODE pool because users will see it more often than STATUS_NEW_CODE.
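The resulting pool layout might look like this (pool names are from the PR; the message strings here are illustrative placeholders, not the shipped copy):

```python
import random

# streaming_util.py -- shared pools, defined once (contents illustrative).
STATUS_PLANNING = ["Thinking...", "Planning the approach..."]
STATUS_NEW_WORKFLOW = ["Sketching a new workflow..."]
STATUS_REVIEWING_WORKFLOW = ["Reviewing the existing workflow..."]
STATUS_NEW_CODE = ["Writing job code..."]
STATUS_REVIEWING_CODE = ["Reviewing the job code...", "Checking the existing logic..."]
STATUS_WORKING = ["Gathering context..."]  # covers the prompt-building steps

def initial_status(is_existing_workflow: bool) -> str:
    # Each service composes a context-specific pool from the shared ones.
    pool = (STATUS_REVIEWING_WORKFLOW if is_existing_workflow
            else STATUS_NEW_WORKFLOW) + STATUS_PLANNING
    return random.choice(pool)
```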
Short Description
Fixes #443
Requires a small tweak in Lightning to change the first status from "Generating code" to "Thinking" at minimum. Maybe a persistent event list in the future. OpenFn/lightning#4630
Implementation Details
Streaming status messages
All user-facing status messages shown during request processing. Status pools are defined once in streaming_util.py and composed by each service.
Frontend (Lightning)
Planner (via global agent)
Initial status (random from pool):
Tool execution statuses (fixed):
- search_documentation
- call_workflow_agent
- call_job_code_agent
- inspect_job_code

Job display names are resolved from the YAML name field first, then fall back to title-casing the key (e.g. fetch-patients → "Fetch Patients").
workflow_chat (direct via router)
Initial status (random from pool):
Fixed status at end:
job_chat (direct via router)
Initial status (random from pool):
Conditional status (only shown when docs are actually needed):
Example narratives
Global agent, new workflow with multiple jobs:
Global agent, editing existing workflow:
Direct workflow_chat, new workflow:
Direct workflow_chat, existing workflow:
Direct job_chat, new code, needs docs:
Direct job_chat, editing code, simple request (no docs needed):
AI Usage
Please disclose how you've used AI in this work (it's cool, we just want to know!):
You can read more details in our Responsible AI Policy