feat(connector): implement multitransport bootstrapping handshake #1098

Greg Lamberson (glamberson) wants to merge 1 commit into Devolutions:master from
Conversation
I really need feedback on this, so I'll take the draft status off and see where we get.

Hi! Good job. I need a little bit more time to dig into this and craft a well-thought-out answer. I'll come back to you in the next 24h.

Thanks! Great! Yes, this is pretty important, so I knew it would take some time. Much appreciated.
Force-pushed from 3585d1b to 6ab896d
Hi! I'm sorry for the delay. Monday is a holiday for me, so I may only come back to you on Tuesday!

FYI, I think I may be leaning toward (b), because the overall architecture stays "sans-IO": it can be thought of as a "state-machine responsibility", where the "pause" is part of the protocol state machine. If the state machine consumes the bytes that triggered the pause, it can own deferring/replaying them, and we avoid duplicating subtle handshake logic in every driver. Otherwise we'll need the same "connect_finalize must stash the last PDU and replay it later" logic in ironrdp-async, ironrdp-blocking, and any third-party driver using the connector directly. Finally, this makes the API more misuse-resistant: with (a) it's easy for the caller to forget to buffer, to buffer the wrong input (e.g. after faulty framing), and so on. It also keeps the API consistent with the existing should_perform_X / complete_X pattern: once we call complete_multitransport() or skip_multitransport(), the connector can immediately continue processing without requiring extra steps.

But I'm still thinking a bit on this and double-checking other parts of the spec.
That's fine. I've relocated back to Illinois in the USA for now, and I'm just getting up and running here. Regarding your inclinations, (b) makes sense: it keeps replay logic in one place and the driver API stays clean. I'll rework to buffer the Demand Active internally once you confirm. Two questions would help me get the rework right:
No rush, happy to wait until Tuesday. Greg
Force-pushed from 7815daa to c9947d4
Pull request overview
Implements the multitransport bootstrapping portion of the RDP connector state machine by collecting optional server Initiate Multitransport Request PDU(s) after licensing and exposing an application-driven pause point before proceeding to capabilities exchange.
Changes:
- Added `MultitransportBootstrapping` request collection and a new `MultitransportPending` pause state.
- Introduced a connector API for multitransport handling: `should_perform_multitransport()`, `multitransport_requests()`, `complete_multitransport()`, and `skip_multitransport()`.
- Updated `next_pdu_hint()` and `step()` logic to route post-licensing traffic into the multitransport flow.
Coming back to the design discussion from February. Two questions are still open and they affect how I address the new Copilot review comments, so I'd like to settle them first.
If you confirm (b) on both points I'll rework accordingly. The detection logic (distinguishing multitransport from Demand Active via BasicSecurityHeader flags) and the state extraction pattern are independent of this choice, and I can address those either way.
Makes `MultitransportBootstrapping` functional: reads the server's optional Initiate Multitransport Request PDUs (0, 1, or 2), then pauses in `MultitransportPending` for the application to establish UDP transport before proceeding to capabilities exchange. Follows the existing `should_perform_X()` / `mark_X_as_done()` pattern used by TLS upgrade and CredSSP.
Force-pushed from c9947d4 to 89ff5d6
I forged ahead despite my last comment. If you want me to revise, just let me know. In summary:

Thanks.
This is a design proposal — looking for feedback on the API approach before wiring up the async/blocking drivers.
Makes the `MultitransportBootstrapping` state functional instead of a no-op pass-through.
After licensing, the server may send 0, 1, or 2 Initiate Multitransport Request PDUs before starting capabilities exchange. This PR reads those PDUs by peeking at the `BasicSecurityHeader` flags (`SEC_TRANSPORT_REQ`), then pauses in a new `MultitransportPending` state so the application can establish UDP transport (RDPEUDP2 + TLS + RDPEMT) or decline.
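The flag-based detection described above can be sketched roughly as follows. The flag value is taken from MS-RDPBCGR (`SEC_TRANSPORT_REQ` = 0x0002); the byte-level peek here is a simplified stand-in for the real ironrdp-pdu header types, not the PR's actual code.

```rust
// SEC_TRANSPORT_REQ flag from the TS_SECURITY_HEADER (MS-RDPBCGR).
const SEC_TRANSPORT_REQ: u16 = 0x0002;

/// Peek at the BasicSecurityHeader flags (first two bytes, little-endian)
/// to decide whether an incoming PDU is an Initiate Multitransport Request
/// rather than, e.g., the Demand Active that starts capabilities exchange.
fn is_multitransport_request(payload: &[u8]) -> bool {
    payload
        .get(..2)
        .map(|b| u16::from_le_bytes([b[0], b[1]]) & SEC_TRANSPORT_REQ != 0)
        .unwrap_or(false)
}
```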
The API follows the existing `should_perform_X()` / `mark_X_as_done()` pattern used by TLS upgrade and CredSSP:
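A minimal model of that pattern, with the method names proposed in this PR but simplified stand-in types rather than the real ironrdp-connector structs:

```rust
// Hypothetical sketch only: `Connector`, `State`, and `MultitransportRequest`
// are illustrative stand-ins, not the actual IronRDP types.

#[derive(Debug, Clone, PartialEq)]
struct MultitransportRequest {
    request_id: u32,
}

#[derive(Debug, PartialEq)]
enum State {
    MultitransportBootstrapping,
    MultitransportPending,
    CapabilitiesExchange,
}

struct Connector {
    state: State,
    requests: Vec<MultitransportRequest>,
}

impl Connector {
    /// True while the connector is paused waiting on the application.
    fn should_perform_multitransport(&self) -> bool {
        self.state == State::MultitransportPending
    }

    /// The 0, 1, or 2 requests collected after licensing.
    fn multitransport_requests(&self) -> &[MultitransportRequest] {
        &self.requests
    }

    /// Application established UDP transport; resume the state machine.
    fn complete_multitransport(&mut self) {
        self.state = State::CapabilitiesExchange;
    }

    /// Application declined multitransport; resume anyway.
    fn skip_multitransport(&mut self) {
        self.state = State::CapabilitiesExchange;
    }
}
```

The driving code checks `should_perform_multitransport()` after each step and, when it returns true, either attempts the UDP handshake and calls `complete_multitransport()`, or calls `skip_multitransport()` to decline.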
Open question: the Demand Active PDU that signals the end of multitransport bootstrapping arrives while we're still in MultitransportBootstrapping. When the connector transitions to MultitransportPending, this PDU needs to be buffered and re-fed after the application completes. Two options:
(a) The driving code (ironrdp-async/ironrdp-blocking) buffers the PDU externally and re-feeds it. This keeps the connector stateless w.r.t. buffering but requires changes to connect_finalize().
(b) The connector buffers the PDU internally in MultitransportPending and replays it when complete_multitransport() / skip_multitransport() is called. This is self-contained but adds buffer state to the connector.
I've gone with (a) in this draft — the connector doesn't buffer. Feedback on which approach is preferred would be helpful before wiring up the async/blocking drivers.
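For comparison, the internal buffering that option (b) would entail could look roughly like this. This is a hypothetical sketch with a plain byte buffer standing in for the real PDU types; it only illustrates the "stash and replay" idea, not an actual implementation.

```rust
// Hypothetical option-(b) sketch: the connector stashes the Demand Active
// bytes it consumed while still in MultitransportBootstrapping, then hands
// them back when the application resumes, so the driver never re-reads
// them from the network.

struct PendingState {
    /// PDU consumed during bootstrapping, deferred until the pause ends.
    deferred_pdu: Option<Vec<u8>>,
}

impl PendingState {
    /// Called from complete_multitransport()/skip_multitransport():
    /// returns the buffered PDU exactly once for replay through step().
    fn take_deferred_pdu(&mut self) -> Option<Vec<u8>> {
        self.deferred_pdu.take()
    }
}
```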
Builds on: #1091
Related: #140