diff --git a/meetings/2026-03/march-10.md b/meetings/2026-03/march-10.md new file mode 100644 index 0000000..31b3146 --- /dev/null +++ b/meetings/2026-03/march-10.md @@ -0,0 +1,1565 @@ +# 113th TC39 Meeting + +Day One—10 March 2026 + +**Attendees:** + +| Name | Abbreviation | Organization | +|--------------------|--------------|--------------------| +| Aki Rose Braun | AKI | Ecma International | +| Ashley Claymore | ACE | Bloomberg | +| Ben Allen | BAN | Igalia | +| Chengzhong Wu | CZW | Bloomberg | +| Chip Morningstar | CM | Consensys | +| Chris de Almeida | CDA | IBM | +| Dan Lapid | DLP | Cloudflare | +| Daniel Minor | DLM | Mozilla | +| Dmitry Makhnev | DJM | JetBrains | +| Eemeli Aro | EAO | Mozilla | +| Guy Bedford | GB | Cloudflare | +| Istvan Sebestyen | IS | Ecma | +| James Snell | JSL | Cloudflare | +| Jason Williams | JWS | Bloomberg | +| Jeffrey Posnick | JPO | Bloomberg | +| Joe Sepi | JSI | Cloudflare | +| Jonathan Kuperman | JKP | Bloomberg | +| Jordan Harband | JHD | Socket | +| Justin Ridgewell | JRL | Google | +| J. S. Choi | JSC | Invited Expert | +| Keith Miller | KM | Apple | +| Lea Verou | LVU | OpenJS | +| Linus Groh | LGH | Bloomberg | +| Mikhail Barash | MBH | Univ. of Bergen | +| Nicolò Ribaudo | NRO | Igalia | +| Olivier Flückiger | OFR | Google | +| Peter Klecha | PKA | Bloomberg | +| Philip Chimento | PFC | Igalia | +| Richard Gibson | RGN | Agoric | +| Ron Buckton | RBN | F5 | +| Ruben Bridgewater | RBR | Invited Expert | +| Samina Husain | SHN | Ecma International | +| Stephen Hicks | SHS | Google | +| Waldemar Horwat | WH | Invited Expert | +| Yagiz Nizipli | YNP | Cloudflare | +| Daniel Rosenwasser | DRR | Microsoft | +| Jesse Alama | JMN | Igalia | +| Kevin Gibbons | KG | F5 | +| Michael Ficarra | MF | F5 | +| Mark S. 
Miller | MM | Agoric | +| Rob Palmer | RPR | Bloomberg | +| Shane Carr | SFC | Google | +| Zibi Braniecki | ZB | | + +## Opening & Welcome + +Presenter: Rob Palmer (RPR) + +Now your video is just a video of the brick wall. + +RPR: That is the podium. I think. So CDA will appear. Can you view TCQ whilst I'm doing this? Because I won't really have the best view. Okay. So we're very nearly ready to go. We're one minute to go time. We think everything is set up except for the microphones. I think the lapel microphones work, and you need to talk into the pointy end, the end of the stick. However, we've not yet confirmed that the regular microphones work. Could someone have a go using one? Maybe Peter? Or Jeff? + +RPR: Testing, testing, testing. + +RPR: Testing. Okay. All right. Seems like the microphones do work. All right. And it is 10:00 AM in New York. So we'll officially say the 113th meeting of TC39 is beginning. Do we have our host, Justin, in the room? I guess he's probably downstairs helping people in. But nevertheless, we will begin. So welcome, everyone, to New York. It's been a while since we were last here. I definitely remember coming here for the Masonic Lodge. That was a memorable one with Kanye. Or was it? Yeah, it was. Okay. Good. So here we are, back again. Today, I will reintroduce you to your chair group. It's remained the same for a while now. And also, we have elections coming up for the chair group, the editors, and the conveners, which Samina will handle. So in the room today, we have Rob, myself. We also have Chris, coming in from Chicago. And then on the call, we will at times have Ujjwal. I know Dan Minor is here. And as facilitators, we also have Justin and Daniel Rosenwasser, who is here in the room as well. So this is an amazing turnout. All right. Everyone that's here, even if you're in the room, even if you've been allowed access to the building, please use the entry sign-up form on the reflector.
You can find that in the matrix room. It's the headline post, the subject of the room, to get to that. But just go to the reflector, and you'll see the main details. On there, scrolling down (I think we should probably make it a bit more prominent, as Nicolo says) is the link to the video form. And it's always useful to be in that, particularly if you're presenting. You will need to be signed in in order to get your slides up. And AKI appreciates it too. We use this for attendance tracking, which is an important rule of ECMA meetings. We also have a code of conduct on our website, TC39.es. Please read it. And please aim to behave in a way that respects the spirit of the document. We're not trying to litigate how close to the line you can go. And the summary is, as Bill and Ted put it well, be excellent to each other. Our daily schedule is a bit different to normal. There is a change. So please (I heard an echo in the room) just recognize that lunch is an hour later than our normal schedule and than we've done previously. So lunch will be between 1:00 and 2:00 PM. We'll also be doing short breaks in the middle of the morning session and the middle of the afternoon session. Is Justin in the room? Does anyone from Google want to talk about the room logistics? What happens if there's a fire? I see a red exit. Please follow the signs. Okay. So Chris corrects me. The break is at 3:00 PM. Does anyone know where the bathrooms are here? Okay. They're left. And all the way around to the left, as you leave the room. Okay. We are using our regular comms tools with a small twist. So as always, to manage our agenda, we have TCQ. Please use this if you wish to speak, and also to see what's coming up next. The chairs do our best to reorder the queue dynamically as and when we see things. When you're using TCQ and you're participating, in the tab that shows the participants rather than the entire schedule, you will see these four buttons.
When you're speaking, there is a button that says "I'm done speaking". Please don't use that; the chairs will move things on. And in terms of the buttons, generally, please prefer the buttons on the left because they are lightweight interventions, like a new topic or discussing the current topic. Please only use the ones on the right, and in particular the point of order, if you need to stop the meeting. Such as if, for example, the notes have stopped or there's something that's highly pressing and needs to stop the show. We also have our Matrix chat for real-time communications. So this is a little bit like Slack, a little bit like Discord, with encryption. Please make use of this. The primary channel for work is TC39 Delegates. The primary channel for not-work, so for off-topic, for jokes, for things the bot said, is Temporal Dead Zone, TDZ. The mascot of which is an opossum. We have an IPR policy that hopefully everyone has read. The key thing is that the regular form of membership, where you will have already signed this, is being an ECMA member and being an employee or a delegate of an ECMA member company or entity. We also have invited experts, which are those people who have gone through the process, who have signed the form, been approved by the ECMA secretariat. And then sometimes we also have observers; I know that some of those have been listed on the ECMA invite. In all cases, please make sure that you have signed the royalty-free task group form. And observers are expected to not speak, and to not make a legal contribution to the meeting. All right. I'm going to read out some special words about our notes. The twist with today's meeting is that we do not have our normal transcriptionist, as provided by the third-party company. So instead, we'll be using the built-in note-taking that is built into Google Meet. Chris, can you confirm that that's been enabled? + +RPR: No. + +RPR: Okay.
We will be asking our hosts, who are also the authors of Google Meet, or the company that has created it, to turn on the transcription there as a backup. But for live notes, we'll be using Kevin Gibbons' Whisper transcription to make up for the lack of our human transcriptionist. I hope that is okay. So a detailed transcript of this meeting is being prepared and will be eventually posted on GitHub. You may edit this at any time during the meeting in Google Docs for accuracy, including deleting any comments which you do not wish to appear. You may also request corrections or deletions after the fact, by editing the Google Doc in the first two weeks after the TC39 meeting, or subsequently making a PR in the notes repository, or contacting the TC39 chairs. One of the things that we rely on, even with the official or the automated transcription, is humans to provide attribution, to say who is speaking, to add the notes there, plus to make any corrections and so on. It's a fun activity. We always ask for volunteers. Can I ask for volunteers now to begin the morning session? We have a hand up from Jon Kuperman. We have one note-taker. Could we get one more, please? + +RPR: Uh-oh. + +RPR: And ACE as well. Thank you, JKP and ACE. The next meeting we have coming up is hosted by JetBrains. That is in May. It is in Amsterdam. So please fill out the interest survey for that. The chairs have already established there is enough interest for this to go ahead. But it's always good to communicate your intent to other people, because there may be people out there who will only go if they see your name on there. So be the attractive party maker that you can be. So very much looking forward to that. And I think AKI may have also done a reminder that it's worth booking your accommodation and travel soon. Because Amsterdam is just naturally a little bit expensive at that time of year. So yeah, strike soon. And we will be posting the invite on the reflector. All right.
So let's get to the usual bits. We've done our ask for note-takers. We'll now seek approval of the notes from the previous meeting, which was held in January. Are there any objections to approving the notes from the previous meeting? I think I'm hearing no objections. So we can say that the previous meeting's notes are approved. Next up, adoption of the current agenda. Are there any objections to adopting the current agenda? + +SHN: RPR? + +RPR: SHN? + +SHN: Can you hear me? + +RPR: I can hear you in the room. I don't know if you're coming through via the lapel. You may need to. + +SHN: I just want to point out that on the agenda, after the Secretary's report, everybody should be aware we're going to be voting. It just was not highlighted. + +RPR: Thank you for that clarification. Yes. So after the Secretary's report, we'll be doing the elections for editors, chairs, and conveners. All right. So we've adopted the current agenda. And we will now move to the next item, which is SHN with the Secretary's report. + +## Secretary's Report + +Presenter: Samina Husain (SHN) + +* [slides](https://github.com/tc39/agendas/blob/main/2026/tc39-2026-009.pdf) + +SHN: OK, thank you. I'm just going to share my screen. Give me a moment. OK. Very good. First, thank you very much to Google for hosting the meeting. As always, it's wonderful from all our hosts. Thank you for the room and the food and all the entertainment that we'll have as the days go on. I also want to comment on the next plenary coming up in Amsterdam. So thank you to JetBrains for hosting that. You made a comment about booking rooms early. Yes, that's really good. But I must say, I don't know which is more expensive, New York or Amsterdam. I'm still struggling with that one. But maybe it could be New York. + +SHN: All right. I have a very light agenda for the Secretary's report. We met six weeks ago, at the start of the year. So it's a bit of a light report.
I just want to give an update to remind everybody about the 30th of June GA and the approvals of the next editions of your standards. Some new items, the usual annex, and then we will follow that with elections for the chairs and editors. For the GA approval, as you all know, June 29 and 30 will be the next General Assembly. Actually, the GA will be on the 30th; the 29th will be a different set of meetings. We need to make sure that your next editions, the 17th and the 13th editions of your two standards, are good to go. I had made an error in my previous slides, which I presented in January. It is the end of March that you do need to freeze your specification, as you all know. The 60-day opt-out and 60-day review period is what you need prior to the GA. The Executive Committee meeting will be held on the 31st of March and the 1st of April, coming up in about three weeks. So it's always good that your specs are frozen by then, so that we can also bring it up to the Executive Committee, so they're aware that it's going to be proposed to the General Assembly for approval. And it should be part of your chairs' report. Just to highlight what's happened over the last weeks: FOSDEM 2026. I think some of you may have been there. Some of you may have been presenting topics related to TC39. There was quite a lot done for TC54, a technical committee that some of you are aware of, and some of you are active in. We had a one-day workshop hosted by AboutCode, talking about PURL, VARS, and CycloneDX, the first and second editions of which were published in December. So it was very good. We had over 100 people in that workshop, thanks to AboutCode. And then ECMA sponsored a social event that evening, right after the workshop, which generated a lot of interest not only in ECMA but also in standards. The new SBOM standard is very relevant. It may touch some of your other organizations, not necessarily TC39 specifically.
But it is a very important topic, very relevant for the European regulations. So keep that in your mind as you think of things for SBOM and CycloneDX, and if you want your organization to participate. It's a very strong committee. We will have a new co-chair in that committee: CDA will be the co-chair for TC54, along with Alyssa Wright from Bloomberg. TC57, HLSL, which is the high-level shading language, will officially start on the 17th of March, with participation from Igalia, Google, and Microsoft. We are looking forward to a few new members joining. It's also looking to be a very attractive and active technical committee. So please keep your eye on that, if that is of interest for your organization. And last on that list is ArkTS, which is a proposal that came in from Huawei, one of our ordinary members, while I was in China. I brought this up in our last TC plenary meeting. I just wanted to bring it up again, as this topic hasn't gone away. I am waiting for more details on this topic. And I would like Huawei to provide a more detailed scope and program of work, so it can be discussed broadly across ECMA. I just wanted to also bring it to everybody's attention here, because there are many members here and some of you may find this topic of interest. It has not yet moved forward. But I do think it's going to come back to our discussions. Our annexes have all the documents from the General Assembly, Executive Committee, and TC meetings. Have a look at that. If there are documents that you are interested in and cannot access, ask your chairs, or ask me or Aki, and we'll be able to bring them to your attention. + +RPR: I'm sorry, Samina. + +RPR: Oh, MM has asked where the instructions are for joining with the right link. SHS has replied: it's all found on the reflector, towards the bottom. Please continue. + +SHN: Yes. And just to comment on that, maybe next time we can make it bold; it's just all the way at the bottom and sometimes we miss it. Thank you.
So, just going to the dates, I want to again emphasize the General Assembly dates. Those are very important. The 29th and 30th, for your next editions of the standards. Then at the December GA, you may perhaps have Source Maps considered already at that time. For the ExeCom meeting, the date is the 31st of March and 1st of April. It was changed. I know that some people didn't have it on the agenda. Just to highlight: it has been changed, and has been noted from the very beginning in the ECMA information. All right. That's just the meetings that are coming up. I do want to point out, for our General Assembly coming up in June: this year is ECMA's 65th anniversary. June 17 is when ECMA was formed. We will do an event on the 29th of June. It is going to be very specific: we will invite members of the European Commission, ITU, ISO/IEC, and certain other partners and liaisons that we've had, to have a discussion and a bit of a celebration on that aspect. All of the Executive Committee and ECMA General Assembly management will be there. We are looking to see what more we'll do. But just keep your eyes on the fact that we are at 65 years, which is quite a special milestone for ECMA. + +SHN: Our next meetings for the TCs you are all aware of. And I think you know where our next meetings are going to be. We're in New York for the 113th. I will just move on with the list of documents. I'll leave you to review that. There are many. And then that is the end of my report. Aki, would you like to add any further comments? + +AKI: I don't think I have any further comments for this. If anybody has any interest in any of the TCs that Samina mentioned, please don't hesitate to let us know. Also, just for the room's knowledge, there's an echo; I can hear myself speaking, and it's messing with me. + +SHN: Hey. Thanks, AKI. Just two other comments from my side. + +> Technical interruption while audio issues are resolved + +SHN: Online. Yes. Good. Thank you.
Because the next part of the discussion is the voting, and I do want it to be very clear for everybody online, so there's no echo. So I don't think online you heard the last words that I shared at the end of my presentation. But again, AKI, thanks for your comments. I also wanted to just mention: please don't forget to do the sign-in forms; for online, AKI, you can take the list of the members that are online. And also please remember to do your summaries and conclusions. So the next step is the voting. I do have it on my slides, if I may continue to share, please. I would appreciate that. I have added it to the back of my slides. So these are the elections that you do every year for TC39. These elections are for your chairs, your editors, and your conveners. Can you see that clearly? So this is your slate today. For your chairs, facilitators, conveners, editors, administrator, and your secretaries. Typically you don't vote for your secretaries. This is your roster today. It has been published already some time back, where you saw the new roster. These are the only changes to the new roster: the names highlighted in grey. + +MF: We don't need to officially acknowledge SYG as 50%. We prefer all editors to be elected as equals, and we will manage how we split our time as an internal affair. + +??: RBN's nomination came in a little late; looking for consensus that this is okay. + +NRO: Yeah. It's a question for the current editor group. KG, SYG, and MF, do you think it's possible to have too many 262 editors? Or would six be fine? + +MF: That has crossed my mind, because we got such a large set of volunteers. I think we can define internal processes that will work. I know that with three, it was very easy—if we've gotten two reviews on a thing and it's minor—we have processes for this. With six, we're obviously not going to want to wait for everyone to have reviewed everything.
Even for major things, it would probably be restrictive to our pace. But I think that we can define a process that does work for us. + +NRO: Okay. Thank you. + +SHN: Is that answer comfortable for you, NRO, to have a roster of six? + +NRO: Yes. I was mostly curious about MF's opinion in this case, given his experience. But I don't see any problem. But also, I don't have much experience with this. + +SHN: Okay. So I come back to my question. And I would appreciate it if you tell me from online: are there any concerns regarding having RBN's name come up at a later stage? No comments. Thank you. There are none. So we move on with the roster as it is. We would like to do this vote on the full slate. So everything that you see in front of you, less the secretaries. Are there any objections to voting on the complete slate? Anything online, please? Anything online? Any concerns? Any hands raised? + +RPR: Nothing online. + +### Speaker's Summary of Key Points + +The Secretary's report noted the preparations for the June GA, including spec-freeze deadlines and upcoming ExeCom dates. SHN highlighted recent TC activities (notably TC54 and TC57), ongoing work on proposals like ArkTS, and noted ECMA's upcoming 65th anniversary. All meeting dates and documents remain available via the usual channels. + +## Election of the 2026 Chair Group, Convenors, and Editors + +Presenter: Samina Husain (SHN) + +SHN: Okay. Thank you. So then I will move forward with the voting. This is a TC vote. This is for you. And so therefore, if there are no further conversations, discussions, or questions on this slate, I will move to the vote. And we will vote on the full slate as you see it in front of you, with the addition of Ron. Are there any persons that have any opposition, any concern regarding the names there? + +RPR: Yeah. Normally, we ask the group being voted on to leave the room so that we can have a discussion, so people can more easily bring things up.
+ +SHN: So we don't do that in the ExeCom, but if you would like that, then I apologize. + +We've done it previously, so I think we should do that. + +Sorry for the group that's changing. + +Okay. Thank you. So then may I ask the members who are listed here to leave the room, please? If you want, I mean, that's great. How do you do this? + +CDA: Sorry. The group that's changing: so for example, last time we did the chairs and facilitators change, where we had Daniel and Daniel Minor joining, we all left the room; myself, Rob, Ujjwal, and Daniel and Daniel and Justin all left the room. So in keeping with that precedent and tradition, I guess, we would ask Michael, Kevin, Shu, Nicolo, Ron, Linus, and, who's the last one? RGN, to exit. + +KG: I probably don't have thoughts, but I will be turning the transcription bot off. + +> \[notes stop\] + +SHN: Thank you. Congratulations to everybody: this slate is unanimously approved. Thank you very much for your support and your continued work. + +SHN: Sorry. I have to repeat myself. I need NRO on the call. + +NRO: Yes + +SHN: Excellent. Congratulations, NRO. You have 100% unanimous approval for the full slate, and of course, for all of the new work. So congratulations, and thank you for your work. So we've concluded the voting. You may go to the next agenda item. + +### Conclusion + +* Unanimous approval of the full slate for the Election of the 2026 Chair Group, Convenors, and Editors. + +## ECMA262 Status Update + +Presenter: Michael Ficarra (MF) + +* [spec](https://github.com/tc39/ecma262) +* [slides](https://docs.google.com/presentation/d/19AryLFu9Crgiq0e-YeKu9Oms9j94mOFWQlXAtawzrWU) + +MF: Okay. All right. I haven't done one of these in forever. KG usually takes the lead. All right. So we have just a couple of normative changes. Since the last meeting, we have DLM's upsert proposal, which adds `getOrInsert` and `getOrInsertComputed` to Maps and WeakMaps. And we have NRO's change; I think this was a needs-consensus PR, right?
For diamond import shapes, when one path goes through an import that gets exported and one path goes through an export-from, it was considered an ambiguous import. And we wanted to make those have the same behavior. And so it was a fairly minor change. It has received implementations, and has now been merged. + +MF: As for our editorial changes, just one thing to call out. This was a contribution from NRO, I guess, pre-editor. Now, when we have these lists of abstract methods on various records in the spec, we have a column next to it that lists all of the concrete definitions within this spec for that abstract method. This makes it easier to discover those things, and also, the references from those locations reference back to this position. So it's integrated with the regular references system. So hopefully, that makes things a little easier for everyone. + +MF: This list (of upcoming changes) is mostly not changed. We have made progress on some of these. We did remove one of them because it was done, but I don't remember what it is because it's not here now. But yeah, it was obviously a more minor editorial thing that was not necessary to communicate. And that's it. + +## ECMA404 Status Update + +Presenter: Chip Morningstar (CM) + +* no slides + +CM: Yeah. So not a lot to report. JSON is sleeping off the excitement from last meeting. + +RPR: Thank you, Chip. Next up. + +## ECMA402 Status Update + +Presenter: Ben Allen (BAN) + +* no slides + +BAN: Yes. Hopefully, I'm not too echoey. So as last time, both RGN and I have had to put our time into other projects, so there are no full updates for 402. + +BAN: Okay. So no meaningful updates for 402. + +RPR: I'd also say, Ben, your audio was weak. I think we could understand you, but it was just a little bit quiet. + +## Test262 Status Update + +Presenter: Richard Gibson (RGN) + +* no slides + +RGN: Okay. We have furious progress on the tests for Immutable ArrayBuffer and also for Explicit Resource Management.
Both of those, I'm expecting to have some commits land this week. Lots of new coverage for Temporal and Intl era and month codes. And a bunch of contributions from someone new to cover gaps in coverage. + +RGN: The final point I wanted to mention is that the initial tests for import bytes have landed. And if anyone is interested in being a test contributor or reviewer, you are always welcome. + +RPR: Thank you, RGN. So more reviewers are being solicited there. Always welcome. + +### Speaker's Summary of Key Points + +* Merges are expected this week for Immutable ArrayBuffer and Explicit Resource Management +* Lots of new coverage landed for Temporal and Intl Era/Month Codes +* A new contributor is helping fill in coverage gaps +* Tests for Import Bytes have landed + +### Conclusion + +* Test contributors and reviewers are welcome! + +## TG4 Source Maps + +Presenter: Nicolò Ribaudo (NRO) + +* no slides + +NRO: So this is again going to be a short update for TG4. Since last plenary, we finally got consensus and merged one pull request, number 211, that avoids the ambiguity between different extraction methods for regular expressions. Other than that, most of our work is still focused on the ranges and scopes proposals. They are both still very active, with spec changes in the proposals and implementations ongoing. Maybe one year ago, I mentioned that we were hoping to have a new edition of the spec this summer, in 2026. Unfortunately, we're not ready for that. So we are moving the target to December 2026. And that's it. + +## TG5 Status Update + +Presenter: Mikhail Barash (MBH) + +* [slides](https://docs.google.com/presentation/d/1PlpJhx2uqRWpCGbytmN38F2QPeoGz8w4uNuylOg92lw/edit?usp=sharing) + +MBH: Right. So, a short update on TG5. We continue to have monthly meetings. At the end of March, the meeting will be about a mechanization of the JavaScript modules algorithms in the Lean Theorem Prover.
And just a reminder: we meet the last Wednesday of every month at 4:00 PM Central European Time. The next TG5 workshop will be co-located with the plenary in Amsterdam, and will be held at TU Delft. We were unfortunately not able to secure a host for this plenary now in New York. I would also like to bring your attention to the Workshop on Programming Language Standardization and Specification, which will be arranged later this year by MF, YSV, and myself, and will be co-located with SPLASH 2026. The keynote will be given by Gilad Bracha, on specifying languages and VMs. And I'll just briefly show you the list of topics that we are interested in discussing at that workshop. I have highlighted a couple of topics that could be particularly relevant for the TC39 folks, in case you want to submit a talk proposal and present at the event. The submission deadline is in the middle of June. And that's it from my side. + +## CoC Committee Status Update + +Presenter: Chris de Almeida (CDA) + +* no slides + +CDA: Nothing. So that's good. Do you want to just talk? Oh, am I audible? Yes, I should be. Oh, code of conduct committee. Yes, no news is good news. No new reports, which we love. As always, if anybody's interested in joining us on the code of conduct committee, please reach out. + +## import defer interaction with tc39/ecma262#3715, and normative PR: Shield dynamic import defer against `Promise.prototype` monkey patch + +Presenter: Nicolò Ribaudo (NRO) + +* [PR 1](https://github.com/tc39/proposal-defer-import-eval/pull/77) +* [PR 2](https://github.com/tc39/proposal-defer-import-eval/pull/79) +* [proposal](https://github.com/tc39/proposal-defer-import-eval/) +* [slides](https://docs.google.com/presentation/d/1T1M7X-cS_ODKOl6Nsq-_YmHusVoD6bPtAS_MHzxzgTw) + +NRO: Okay. So yeah, we have two normative changes to present for import defer. The first one is about avoiding leaking some internal promises that are meant to be spec-internal, accidentally exposing them to user code.
The context here is that within the spec, all modules have a thing called the top-level capability, which is a promise that represents the module evaluation. This was introduced at the time with top-level await, but all modules have it, not only async modules, because this also represents the evaluation of all the dependencies of a module, which might always be asynchronous. And this is just an internal spec artifact. It's not actually exposed in any way to user code. There is a spec bug in the dynamic form of import defer, where import defer needs to look through and traverse the module graph to find all of the async dependencies and pre-evaluate them. And so it could have multiple of these transitive dependencies that are asynchronous, and it has to wait for all of them to be done. And in the dynamic form of import defer, the way we're waiting for all of those modules to be done is by using the PerformPromiseAll AO on the top-level capabilities of all of these modules. Now, PerformPromiseAll does not just perform the PerformPromiseThen spec operation on the promises. It invokes the .then method on them. And if you monkey-patch Promise.prototype.then, it will call your monkey-patched function. So you can look at the `this` value and see what the promise is. Which means that by monkey-patching Promise.prototype.then, you can get a reference to this otherwise spec-internal promise. So that's bad. We should not leak internal spec details, especially if they weren't leaked before. We should not introduce ways of observing otherwise unobservable things. So the solution we're going with is that instead of just using PerformPromiseAll, we basically have a copy of that AO that calls PerformPromiseThen instead of calling the .then method on the various objects. So this always uses the actual built-in then steps instead of whatever was installed there.
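To make the leak concrete, here is a minimal sketch (illustrative only, not the spec's own mechanism) of how a patched `Promise.prototype.then` can capture promises that a `Promise.all`-style operation passes through it:

```javascript
// Illustration: Promise.all invokes the public .then method on each input,
// so a patched Promise.prototype.then can capture promises that an API
// never meant to expose.
const seen = [];
const originalThen = Promise.prototype.then;
Promise.prototype.then = function (...args) {
  seen.push(this);                       // leak: `this` is the input promise
  return originalThen.apply(this, args); // preserve normal behavior
};

const internal = Promise.resolve("meant to stay internal");
Promise.all([internal]);                 // synchronously calls internal.then(...)
Promise.prototype.then = originalThen;   // restore the built-in

console.log(seen.includes(internal));    // true: the promise was observable
```

A spec-level PerformPromiseThen, by contrast, runs the built-in then steps directly and never consults the (possibly patched) `.then` property, which is the approach this PR takes.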
We might eventually want to fix this problem of promise patching and observing otherwise unobservable things more in general. There will be discussion on this also on the last day of this plenary. But for now, we just want to avoid introducing this problem in import defer, in a place where we didn't have the problem before. Oh, and by the way, we found this bug while doing the V8 implementation of the proposal. So do we have consensus for this change? + +KG: Strong support. And I support doing this wherever we can, ideally in general, but if we can't, I'm still supportive of doing it piecemeal, even if that's not totally consistent. + +NRO: Okay. Thank you. I also see Olivier and Mark Miller with support (end of message) on the queue. Okay. Thank you. So then we have the second one. CDA, could you mute the room again? Thank you. So we have the second normative PR. That is to properly propagate the phase, the defer, in re-exported deferred imports. So the context here is that a normative PR that was approved last meeting for ECMA-262 changed the code at the top of this slide to behave like the code at the bottom of this slide, making the import-and-export in two statements behave like just an export-namespace-from somewhere. It accidentally changed the behavior of re-exporting deferred namespaces in the import defer proposal. If we just take the updated ECMA-262 spec text and apply the current version of the import defer proposal on top of that, we get that this first code, which does an import defer of a namespace and then exports it, actually behaves like doing import defer of the namespace and then doing a plain non-deferred namespace export. This is just how the update combines with the proposal. And so it makes the re-exported namespace eager rather than deferred. And so the editorial way of fixing this is to properly propagate the phase there.
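Since the slide contents aren't captured in the notes, here is a rough sketch of the module shapes being discussed. This uses the proposal's `import defer` syntax, which is not yet available in shipping engines, so it is illustrative pseudocode rather than runnable code:

```js
// reexporter.js
import defer * as ns from "./heavy.js";
export { ns };

// Intended semantics: consumers of `ns` via reexporter.js get a *deferred*
// namespace; heavy.js is only evaluated when a property of `ns` is accessed.
//
// The bug: combined with the rebased ECMA-262 change, the two statements
// above accidentally behaved as if the re-export were a plain, non-deferred
// namespace export, making the namespace eager for consumers:
//
//   export * as ns from "./heavy.js";
```

The fix propagates the defer phase through the re-export so the deferred semantics survive.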
I was actually inclined to not consider this to be a normative change. It was just an accidental bug introduced due to some rebasing, basically; it does not change what we had consensus for in the past. But regardless, given that it is technically normative, I'm here asking for consensus for this. Is there any objection here? + NRO: Okay. I see DLM, RGN, MM, and JSL supporting, end of message, in the queue. So thank you all. And this is it from me. + RPR: All right. Thank you, NRO. + ### Speaker's Summary of Key Points + * We presented two normative pull requests for import defer, fixing two separate spec bugs + ### Conclusion + * Both changes had consensus. + ## Withdraw Dynamic Import Host Adjustment + Presenter: Stephen Hicks (SHS) + * [proposal](https://github.com/tc39/proposal-dynamic-import-host-adjustment) + * no slides + SHS: Hello. All right. This is a really quick one. This was brought up last meeting, when we were going through the stage two proposals, I think it was. And we weren't sure that anyone cared about it. But we didn't want to throw it out without having actual confirmation. So I spoke with the people at Google who put it forward in the first place. Everyone's happy letting it go. We're not following up with this anymore. So I guess, yeah, any movement to withdraw it or whatever needs to happen here. + RPR: So I think you're asking for consensus. + SHS: Yes. Consensus to withdraw the proposal. + RPR: Are there any objections to withdrawing dynamic import host adjustment? You have two messages of support. DLM says, "Support withdrawing the proposal." MM says, "Plus one." End of message. + SHS: Excellent. + RPR: So I think, yes, that concludes it. We have consensus for withdrawing this proposal. + SHS: All right. Thank you.
+ ### Speaker's Summary of Key Points + * Last meeting uncovered this proposal as stale, but we weren't sure it was okay to withdraw. + * I spoke with our security team who brought the proposal in the first place, and they have no plans to further pursue it. + * Seeking consensus to withdraw. + ### Conclusion + * Proposal withdrawn + ## Abort Protocol Discussion + Presenter: James M Snell (JSL) + * [repo](https://github.com/jasnell/discussion-abort-protocol) + * [Slides](https://github.com/jasnell/discussion-abort-protocol/blob/main/slides-export.pdf) + JSL: Yeah. Can you all hear me okay? Okay. Let me share my screen here. All right. So in Tokyo, this came up briefly: we were talking about revisiting abort/cancellation. I did not want to bring a proposal yet. I know the committee's talked about this for a number of years, and I want to make sure that there is general agreement on the problem statement and some of the design principles before I actually bring a proposal. If there's agreement here, then I would plan on bringing the start of a proposal in May. So I just want to touch on where we were, where the previous discussion left off, present a problem statement, and see if there's some agreement on that. And then go through some of the design principles that would frame the proposal moving forward and see if we can make sure there's agreement on that. Okay. So objectives, like I said: present the problem statement, go through principles, talk about some of the scope. This is a discussion, not a proposal. I'm not asking for any stages today, just to be absolutely crystal clear. The gap: the ecosystem has converged on AbortController/AbortSignal as the cancellation signal. But these are WHATWG APIs; they're not language-level. This is a problem on its own: we can't normatively reference web APIs. That's, I think, a different discussion that we could probably have eventually.
And I think we're trying to find a sock to cover up the mic. That's fine. And of course, not all JavaScript environments actually support the web APIs. It'd be nice if we had something that was supported there. We have no built-in concept of cancellation, and yet the entire ecosystem depends on one. So the proposed problem statement here: an abort signaling pattern should be part of the language design so that the existing solutions can be retroactively satisfied with minimal or no changes. I don't anticipate we'll get to having no changes, but let's try to minimize what those changes are. Prior art: we had a conversation back in 2017. RBN, Dominic, and Yehuda brought this up. It stalled out. I don't have the full context there; I could go back and search for it. But this did come up before. And then again in 2019, RBN brought this up again: "Hey, let's have a protocol-based approach." But that seems to have trailed off as well. So the question is, do we at least have consensus with the group now that we should even pursue this? Before we get into the design principles, I'd like to open it up. If anybody has any thoughts on this, any objections to this; I think this is actually the more important part. Does anyone think we should not be doing this at the language level? Is the Web API enough? Mark's on the queue. + MM: Can people hear me? + MM: So I'm on the fence on this. I understand the pressure from the ecosystem. And certainly, I believe that it's preferable to have something that everyone's going to implement independent of host, in TC39 rather than WHATWG. So for all of those reasons, I'm inclined to support it. The only thing that really holds me back is the synchronous callback. And I saw that within the proposal it was marked non-negotiable, so I'm not hoping that could be negotiated. And I understand, since the motivation is compatibility, on those grounds it's non-negotiable anyway.
The unpredictability with regard to having something that occurs at a time that's not in control of the code that's called back into very much reminds me of Unix signal handlers. Because of the unpredictability of when the code within the thread gets called, what people have learned over time is that the only thing that's safe to do in the signal handler, to a very good first approximation, is to set a bit that the program then checks at safe points in the program. And this proposal already has the bit. I think that, legacy aside, the synchronous callback is a terrible idea. The asynchronous callback is something that to my mind would be fine. And for the use cases of the synchronous callback, the way those use cases should be coded is to synchronously check the bit at safe points in the program. Doing anything during an unpredictable synchronous callback is inherently dangerous and might leave corrupted state. That's it. That's what I'd say. + RPR: Thank you, MM. KG has a reply. + KG: Yeah. It's a good concern. It has not been a problem in practice. It's been almost a decade of experience on the web. + MM: So if anybody has any insight, offline is fine, insight into why it has not been a problem in practice, whereas it's a fatal problem for Unix signal handlers. I understand we're not doing fine-grain multi-threading, but still, I would expect it to be a similar problem. So I'd love anybody's insights as to why it has not turned out to be a problem in practice. + KG: The reason this hasn't been an issue is that a lot of code doesn't need to use a callback. If you're writing an async function, it just does `throwIfAborted()` after each await point, or at whatever points it wants. The callback needs to exist for other patterns. + MM: So the callback hasn't been a problem because people haven't been using the callback. + JSL: Okay. I do want to say some of this is getting into the design principles.
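The pattern KG describes, checking at safe points rather than reacting to a synchronous callback, might look like this sketch; `process` and `copyChunks` are hypothetical stand-ins, not API from the discussion:

```javascript
// Hypothetical per-chunk work; stands in for real async I/O.
async function process(chunk) {
  return chunk * 2;
}

// An async task that polls the signal at safe points: it calls
// throwIfAborted() before each unit of work instead of registering
// a synchronous abort callback.
async function copyChunks(chunks, signal) {
  const out = [];
  for (const chunk of chunks) {
    signal?.throwIfAborted(); // safe point: bail out between chunks
    out.push(await process(chunk));
  }
  return out;
}
```

An already-aborted signal makes the returned promise reject with `signal.reason` at the first safe point; work already in flight is never torn down mid-step.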
And I'd like to focus on the problem statement and design principles next. So. + MM: Sure. So KG, let's take this offline, or onto the issue tracker. + JSL: We do have RBN on the queue, offering to provide historical background if necessary. + RBN: Yeah. So when I started looking into the cancellation protocol or proposal a number of years ago, there had been some discussion about cancellation when TC39 adopted Promise. At the time, DOM was pursuing futures, which was their version of the Promise implementation. And it took a bit of time before we had a solid proposal for cancellation. There had been a lot of discussion back and forth; there were different needs and different interests in cancellation. When I proposed cancellation, it was held up for a bit by the fact that there were some members of the committee that were interested in cancellation being ephemeral: that it just carried along somehow with your asynchronous operation, and you could just cancel any promise and it would cancel the operations beneath it. That ephemerality had issues. There was the potential for operations that could be in the middle of an execution that shouldn't be canceled, but then suddenly could be, because it's just automatic. That would be problematic. So the discussion went back and forth. Because of the delays in discussion, WHATWG went ahead and moved forward with AbortController because they wanted to ship `fetch`. They needed something out there to be able to cancel it, so there was a sentiment at the time that that was the way that it was going to happen. They didn't want to wait on TC39 coming around to that decision. Once AbortController shipped, there were a number of initiatives to try to find ways to interoperate with it. Again, one of the issues you've pointed out with AbortController is that it's very DOM-heavy as far as its implementation.
This requires any other platform implementation, such as Node.js, to bring over a lot of DOM context from the specification into something that traditionally didn't use much in the way of DOM primitives to actually support it. So we did look at an approach that used a symbol-based protocol at first, and made a lot of progress on that, and then it petered out. It was difficult to get WHATWG to accept, and at some point WHATWG said it was only interested in a host hook that would allow you to enlist in DOM cancellation. The reason the proposal petered out is that every avenue that we tried to investigate to interoperate, one that actually gave some capability to the language that we could specify things around, was being cut off. And that's where things ended up. + JSL: Cool. Thank you, RBN. Yeah. Very, very helpful there. So I come back to this. It's been on the agenda before. Cancellation got to stage one and didn't progress. Is there any objection to having this as a stage one or stage two proposal, or moving forward? Does anyone think we should not have an abort signaling protocol in the language? + JHD: So, based on the previous round of history around this stuff, I'm not convinced AbortSignal is the best cancellation mechanism. And if there's a better one, it would be really nice to have that be what's in the language and figure out how to make that interoperate with AbortSignal. But I also don't have enough context to argue for anything specific. So if there is anyone who believes there's a better pattern, I think exploring that should be part of this proposal. In other words, I think we should have Promise cancellation be the thing we are still looking into, and then the solution should either be something better that interoperates with AbortSignal, or we should accept that AbortSignal is what we should be interoperating with.
I think either path is a good thing, as long as we're sure about which path is the better choice. + JSL: Okay. On the queue, we have Mark, then Ron, then Kevin, but then Lea just came in with a reply. + LVU: +1 Stage 1, agree around potential redesign, but that's for later EOM. + MM: As stated, I'm on the fence, EOM. + JSL: Let's see. We got Ron: "We can really use a cancellation mechanism." Did you want to say anything more to that? + MM: Yeah. So I'm sorry, I also put myself on the queue as replying to JHD. Just want to say, legacy aside, I think there are absolutely better cancellation mechanisms. Simply removing the synchronous callback in this one would make it better. The reason why I'm still on the fence is just because of the legacy compatibility. + RBN: There are a number of proposals that we've adopted over the years where it would be beneficial if we could actually integrate cancellation. The `import`-call mechanism is a prime example of a place where you should be able to cancel that operation just as you can with `fetch`. The same could be said for any `then` continuation, as you might want to be able to remove the callback you enlisted via `then`. Currently, the only way to do this is to have some indicator at the top of your callback that asks, "Should I actually run?", so that it just skips the callback if you want to prevent that operation from happening. We have APIs that we're considering for shared structs which add synchronization primitives that can be used in worker threads to coordinate via blocking, but we might also want to be able to use them on the main thread via asynchronous operations and callbacks. If you're going to wait for an asynchronous event from a background thread that might not happen, you might want to be able to specify a timeout for it. You might want to be able to manually cancel when the user clicks a button.
These are all operations that are very difficult to support if they are built-in mechanisms within the specification and we have no mechanism to cancel them. So I do think that it's very important that we actually do bring some mechanism for cancellation to TC39. I know I have been on the record as not a huge fan of the AbortController API. In the past, WHATWG was very opposed to there being more than one way to deal with cancellation, which left us with only AbortController. This makes it very difficult for us to adopt a mechanism that is clean and consistent with the ECMAScript specification and our existing APIs, aside from finding ways to shoehorn the AbortController mechanism into our approach. So I do think that we should find some solution, whether that is to find a way to adopt a subset of AbortController or to adopt a mechanism to adapt AbortController via a protocol. I do think this does need to move forward. I will say that, in addition, rather than seeking stage one for this, we should just potentially add you as a co-champion for the Cancellation proposal and find a way to bring that back up to snuff. + JSL: Yeah. That was actually one of my next questions: whether it needed a new proposal or whether we can continue the original one. All right. Kevin, you're on. + KG: Yeah. I support this. I'm fine keeping it general, although I do also kind of agree we should not have a second cancellation protocol. But with regard to JHD's specific suggestion about keeping this general, we should definitely not phrase this as "promise cancellation". + JHD: Task cancellation or something like that would make perfect sense to me. + KG: Yeah. Yeah. + JSL: Okay. So yeah. So Ron's point there: we did have the cancellation proposal before. As far as I know, that was never formally withdrawn. So the question there is, do we just continue that, or does this require a new one? I mean, just if anybody thinks that it requires a new effort. Okay.
So I'm happy to be added as one of the champions for that to help push it forward. Okay. So that's great. I'm not hearing any objections. Not seeing anything in the queue. No one's throwing anything at me. To say that this should be done… + MM: RBN is on the queue. + RBN: I'd like to be part of that discussion offline as well that you're talking about with Kevin regarding sync callbacks. I will say that while working with async code, async functions, and await, you tend to be able to use something like the `throwIfAborted()` operation to throw an exception while you're doing async operations, when you're at a convenient place to do that after an `await`. The primary purpose of a synchronous callback is for working with APIs that are not `async`/`await`, which you tend to see on the web when you're working with `setTimeout`, because you need to be able to clear that timeout before that operation runs. That's not something you can necessarily `await` on. You need some type of trigger from the system to tell you that you need to do some type of work. Now, that's not saying that those callbacks could not be executed asynchronously. It's just that most approaches to cancellation, especially the API design that I was initially pushing for in the cancellation proposal, were very heavily influenced by the .NET `CancellationToken` API, which does evaluate those callbacks synchronously. The main reason that those are synchronous is to have cancellation be triggered as early as possible for those events, rather than waiting for async ticks or having to add something to the queue, which in turn must wait for whatever current async code is running. But we can get into more details about those approaches offline. + JSL: Nice. Okay. Just looking at the time, we've got about nine minutes left here. So I want to talk about some of the design principles, only if there's disagreement on them. I don't think we have time to talk through all of them. + MM: Just a question to RBN.
I agree that there are synchronous use cases. My claim is that the only robust way to write those synchronous use cases is to synchronously test a bit at safe points in your program. + RBN: That isn't something that works well with `setTimeout` or `requestAnimationFrame`. Those operations are triggered by the host environment, so I might not have any other user code that's running other than these events, and thus no "safe point" at which to test this bit. It's generally frowned upon in web development to have an infinite loop that's constantly just checking a bit, so we generally want to avoid that. Even if it's async, you'll still be starving your microtask queue because you're constantly adding new awaits to the callbacks, and it's just wasting time. So in those cases, it's better to be notified in some way. + MM: Okay. Let's take this offline or to the issue tracker. And I'll yield. + JSL: Okay. In terms of what we're talking about: principle one, receiving a signal, how we receive the signal. Obviously, just as a general rule, you've got to be able to subscribe to it, whether it's sync, async, whatever, right? I don't think this one's controversial. I think this one might be. This kind of gets to the root of what we were just talking about. Again, I'm just going through this briefly. If you have an objection, please jump in. Or if one of these just doesn't fit with your mental model, jump in. That's why I'm going through this. This one's here: notifications. Most of these are based on the current model with AbortSignal, because this is just how it works. + JSL: Okay. All right. Principle 3 is for state checks. Again, this kind of gets into the conversation that was just happening. Based on the way AbortSignal currently works, you can check: is this thing aborted? Some operations can't be interrupted, right? We have to do it either before or after.
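Both styles under discussion, the upfront state check and the abort callback, appear in this hypothetical cancelable-delay sketch (the `delay` helper is an illustration, not anything proposed); here the callback exists precisely because there is no `await` point at which to poll:

```javascript
// A cancelable delay: the abort listener is the only hook that can
// clear the pending timer before it fires.
function delay(ms, signal) {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) return reject(signal.reason); // upfront state check
    const onAbort = () => {
      clearTimeout(timer); // without a callback there is no way to do this
      reject(signal.reason);
    };
    const timer = setTimeout(() => {
      signal?.removeEventListener("abort", onAbort);
      resolve();
    }, ms);
    signal?.addEventListener("abort", onAbort, { once: true });
  });
}
```

The `{ once: true }` registration and the `removeEventListener` on completion are the manual bookkeeping RBN alludes to later in the discussion.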
Did an abort occur while I was busy? And that kind of thing. + JSL: Principle 5: the reason travels with the signal. It's not just "stop"; it's "stop, and here's why." One-way, one-time, and irreversible: there's no unabort. We're receiving, not sending. The idea here with the protocol is that this is just a one-way signal, right? We can unsubscribe anytime we want, when we know we are no longer interested. + JSL: The multi-consumer one, again, is just an artifact of how AbortSignal currently works; it fits within that model. You have one signal, and anybody can subscribe to it. This is one, I think, where if we are going to revisit the design: how might this actually look? And is this actually a concern? The multiple-sources one, this is the `AbortSignal.any` case, where you might have a timeout or you might have a user-triggered signal. + JSL: The never-aborts one is one we can discuss. This one isn't quite covered by AbortSignal right now, other than the fact that you could just choose never to actually trigger it. Right? Is this a principle that we actually need to have, need to capture, need to design around? Let's see. We have a clarifying question. + JHD: Combinators. Yeah. So that's, I think, principle 10? You have a phrase at the bottom: only the `any` combinator has demonstrated practical need. So, given that we have practical need for the four, or six, however many promise combinators we have (I'd have to look it up real quick), they all have practical need, although some of them are more edge-casey than others. So I'm confused why there wouldn't be a corresponding practical need for abort combinators. + JSL: There might be. The ecosystem hasn't actually come up with any, right? And outside of `AbortSignal.any`, I've seen zero discussion anywhere about any other combinators. + JSL: But again, I'm bringing this up specifically for having this kind of conversation. Point of order: four minutes remain. Thank you, Chris. New topic, Ron.
Never abort. + RBN: You asked whether that's a useful capability. It's useful for memory management scenarios. The biggest value comes when you write an application that makes use of a large cancellation graph of linked signals, such as with `AbortSignal.any()`. The problem, though, is when you have a large single-page application that has a lot of moving parts in memory and they're all depending on a complex cancellation graph. This can result in a lot of callbacks that are registered and holding onto a lot of memory. Once you've passed a point of no return and no longer care about cancellation, we don't need to hold onto them anymore. We don't need to hold onto large portions of memory due to closures if we're never going to call them, so why are we holding onto this data? These types of closures have historically been an issue on the web, and it's one of the reasons that it's so hard to actually use `AbortSignal`/`AbortController` in pure JavaScript code for anything other than `fetch`. It's also hard to use on its own, because you have to call `addEventListener` with a registration that says this only happens once. If you don't, then the callback is going to remain registered and stay around even if the signal is just dropped on the floor. It's better to be able to explicitly say, "I'm never going to cancel this, so you can drop all the registrations". Also, if you know the signal will never be canceled, you don't even need to register in the first place and can optimize for that case. + JSL: We are out of time. I do want to just finish the two items on the queue. Clarifying question from KM: does that close the door for other combinators? No. We just need documented use cases for them. And then, though I don't think we have time, Stephen had a new topic: "Any combinator was required because efficient non-leaky implementation was impossible in user land EOM". So I think that closes out. The principles are in a repo. It's in the link in the agenda.
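For reference, the one combinator the discussion treats as proven, `AbortSignal.any`, composes a user-triggered signal with a timeout; aborting either source aborts the combined signal and carries its reason:

```javascript
// Combine a user-triggered signal with a timeout; whichever aborts
// first aborts the combined signal (and supplies its reason).
const user = new AbortController();
const combined = AbortSignal.any([
  user.signal,
  AbortSignal.timeout(5_000), // fires if the user never cancels
]);

combined.addEventListener("abort", () => {
  // tear down linked work here
});

user.abort(new Error("user clicked cancel"));
```

This is exactly the linked-signal graph RBN's memory concern is about: every `any` link implies registrations that a "never aborts" marker would let an implementation drop.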
If anyone has any further discussion around those, we can do so there. But then eventually, after this, we'll probably merge those principles into the existing discussion. Thank you. + ### Speaker's Summary of Key Points + * Conversation is essentially rebooting the cancellation/abort protocol discussion… is there still consensus this should be in the language? + * Review of design principles + * Do we need a new proposal or continuation of prior work? + ### Conclusion + * Yes, there's general agreement it should be part of the language + * There are still discussion points about the design (synchronous callbacks, combinators, AbortSignal alignment, etc.) that need to be worked through + * JSL will join the champions of the original effort to help reboot and get it moving + ## Explicit Resource Management Conditional Stage 4 Status Update + Presenter: Ron Buckton (RBN) + * [proposal](https://github.com/tc39/proposal-explicit-resource-management) + * [slides](https://1drv.ms/p/c/934f1675ed4c1638/IQCRVZZRvm3sSp9rK19F2SMJAQpZaM-CTuIElCRvQXim9Fg?e=lUcddY) + RBN: Hello, everyone. I'm Ron Buckton, currently with F5, and I'll be talking about the Explicit Resource Management proposal. If you recall, I believe it was at the end of last year, we discussed a conditional Stage 4 advancement for Resource Management. The only two things that were outstanding at the time were final editorial review of the specification text and the review and adoption of the few outstanding PRs related to Test262 tests. + RBN: This proposal includes the `using` and `await using` declarations, which are lexical declarations similar to `const`. They support RAII, which is "Resource Acquisition Is Initialization": the idea is that when you initialize the resource, you acquire it, and its cleanup is then handled automatically. We do this through the use of a built-in symbol-named method on the object.
This is either `Symbol.dispose` for a resource with synchronous cleanup, or `Symbol.asyncDispose` for a resource which requires asynchronous cleanup. + +RBN: We also introduce two container classes to the standard library: `DisposableStack`, which holds resources that are synchronously disposable, and `AsyncDisposableStack`, which holds resources that are asynchronously disposable. The primary purpose of this is to support aggregation of resources, such as when creating a class instance that may allocate multiple resources that should all be cleaned up when that class itself is cleaned up. These containers also provide interop mechanisms for operations that have not adopted the `using`/`await using` resource management capability. + +RBN: In addition, we also introduced `SuppressedError`, which is a way for JavaScript to wrap both an exception that came from disposal as well as any exception that would otherwise be suppressed that may have come from the original body of the code, preserving access to both exceptions. + +RBN: As far as our progress towards Stage 4, we are just about finished with the review of the Test262 tests. All the tests are written. We have nearly, I think, 300 different tests, probably more if you look at some of the individual operations within the test files. These cover `using`, `await using`, `SuppressedError`, `DisposableStack`, and `AsyncDisposableStack`. The only outstanding tests that are still pending review are for the `AsyncDisposableStack.prototype.use` method. There's only about 20 tests to review for that method, many of which are very similar to the ones that are in `DisposableStack.prototype.use`, so I'm hoping that as KG was saying, we may be able to wrap that up this week. Finally, the ECMA-262 PR was approved by KG. Although he's now stepped down as editor, I'm waiting for final review from MF and SYG to finally be able to advance this. I was hoping we'd be able to get there by this meeting, but if not, we're fairly close. 
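The core of the dispose protocol RBN describes can be sketched in today's JavaScript without the new syntax; `withResource` below only approximates what a `using` declaration arranges and is an illustration, not the proposal's exact semantics:

```javascript
// Symbol.dispose may not exist in older engines; fall back for this sketch.
const DISPOSE = Symbol.dispose ?? Symbol.for("Symbol.dispose");

class TempResource {
  constructor() {
    this.open = true; // acquisition happens at initialization
  }
  [DISPOSE]() {
    this.open = false; // synchronous cleanup hook
  }
}

// Roughly what `using res = new TempResource()` arranges: cleanup runs
// when the block exits, even if the body throws.
function withResource(fn) {
  const res = new TempResource();
  try {
    return fn(res);
  } finally {
    res[DISPOSE]();
  }
}
```

The `await using` form is the same shape with `Symbol.asyncDispose` and an awaited cleanup step.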
+ RBN: I do want to mention a couple of editorial changes. These should not affect any of the runtime behavior, as we just did some cleanup. We added a `ContainsUsing` SDO to replace the hand-wavy Static Semantics checks we were doing for `using` and `await using`. We had a Static Semantics check on `using` or `await using` declarations that would check that they were contained within a `case` or `default` clause without being in a block, as well as a check that they weren't used at the top level of a Script. Instead, we've moved those two checks to their respective containing elements, so they're now checked in the Script, `case`, and `default` clauses using the `ContainsUsing` SDO, so that we have a more reliable and less hand-wavy way of checking these semantics. We also updated the `AsyncDisposableStack.prototype.disposeAsync` method to be a proper built-in async method following the adoption of async built-in methods, and we've replaced where we were passing a `hint` into the `InitializeBinding` AO that would eventually evaluate the `AddDisposableResource` AO to do resource tracking. There were a lot of places we had to thread `hint` through to make that work, and most of the call sites don't actually pass in the `sync-dispose` or `async-dispose` indicators, so we instead removed that and moved it to the few places where we actually do need to track the resource being disposed. Thank you, KG, for putting that PR together. Again, none of these things should affect the runtime semantics. + RBN: I have links here to the explainer, the rendered spec text, and the ECMA-262 PR, which again is just pending some final review. It's been updated and should hopefully be ready to go fairly soon. For the Test262 PRs, there was one very large one that was split up into about 12 smaller chunked PRs, and we're waiting on just the last few tests for one method to be reviewed. Currently, this is conditional Stage 4, so we're hoping that we can get this wrapped up pretty soon.
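For readers unfamiliar with the `SuppressedError` shape discussed here, a minimal userland shim (hypothetical, for engines that don't ship it yet) captures both errors:

```javascript
// Minimal sketch of SuppressedError's shape: `error` is the later error
// (e.g. thrown during disposal) and `suppressed` is the earlier one it
// masked (e.g. thrown from the block body).
class SuppressedErrorShim extends Error {
  constructor(error, suppressed, message) {
    super(message);
    this.name = "SuppressedError";
    this.error = error;
    this.suppressed = suppressed;
  }
}
```

Both exceptions stay reachable on the resulting object, unlike a bare rethrow that would drop one of them.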
+ MM: `SuppressedError` does not take an options bag. Now, this being already at conditional stage four, I'm certainly not suggesting that it be added in this proposal. But I'd like to see a quickly-following proposal, one that hopefully can advance rapidly, that adds an options bag to `SuppressedError`. Would that break anything? Is there any reason why there was no options bag on `SuppressedError`? + JHD: Every error takes one, for `cause`, at the very least. + RBN: I don't believe `SuppressedError` does. This was actually a discussion that we had several years ago: we didn't include `cause` for `SuppressedError` because `SuppressedError` already has something that indicates the thing that it's nesting. We felt that the intuition around `cause` was at odds with how the `suppressed` property indicates what the suppressed error is. Since we're already tracking this in a more specific property, we didn't want that conflict in intuition, which is why we didn't include `cause`. Now, if we do include an options bag for things like stack traces, as was discussed in the last plenary, then we would probably add it. + MM: Okay. Thank you. And I support. + DLM: I think we should be more cautious about conditional advancement in the future. In this case, it's been 10 months. I believe Immutable ArrayBuffer was also advanced conditionally to stage three last May, and it still has not actually made it to stage three. This isn't meant to be a critique of anyone involved. I just think, as a committee, we should be more cautious about these conditional advancements if we don't think the conditions can be met in a short period of time, like a week or so. And I'm personally going to be more cautious about supporting advancement like this in the future, particularly for stage three and four, because as an implementer, this makes my life slightly more difficult: when something reaches stage three, I can ship our implementation if we have it.
And when something reaches stage four, then that means I can remove any kind of flags or conditional shipping stuff. So yes, I hope we can be more cautious about this in the future. + RBN: I would like to call out that we need more help on Test262. One of the main blockers for advancement and getting out of the conditional status for this has been the limited time and availability of reviewers on test262. I think test262 definitely needs more people involved in review to help some of these things advance. + CDA: Thanks, Ron. There's a plus one to that comment from Dan and from OFR. I also think it'd be hard to disagree: if conditional advancement can't be resolved before the following plenary, there's not really a lot of value in conditionally advancing things at that point. JSL says, "The error code proposal would add an options bag to SuppressedError." And that is it for the queue. + RBN: Hopefully, we'll hear back from the folks on test262 about getting that last PR merged, and the editors will be able to finish up their reviews in the near future. If there's a chance that any of this happens while we're at plenary, I'll let folks know. In the meantime, I have nothing further on this. + ### Speaker's Summary of Key Points + * A few editorial changes, no semantics changes + * Almost done; still waiting on remaining test262 reviews and editorial review + ### Conclusion + * Test262 is in need of more reviewers + ## Introducing: Intl Era/Month Code for Stage 4 + Presenter: Ben Allen (BAN) + * [proposal](https://github.com/tc39/proposal-intl-era-monthcode) + * [slides](https://notes.igalia.com/p/era-monthcode-stage-4#/) + BAN: Oh, fantastic. Okay. So I am presenting Intl Era/Month Code for stage four. It has been a long but ultimately fruitful process. Okay. So to recap, the premise of this is that Temporal supports a number of non-ISO 8601 calendars. The behavior of these has historically been defined outside of ECMAScript.
In practice, that has meant CLDR and ICU4C/ICU4X. Temporal adds calendar arithmetic capability, and although we don't want ECMAScript itself to specify the arithmetic for every calendar, we do want some guardrails among implementations to prevent divergence. So we've added descriptions of the supported calendars that remove potential ambiguities. And this is actually a closed set of calendars now: it's not possible to add additional calendars without normative changes. We have a list of the valid era codes and aliases as standardized by CLDR. We include the valid ranges of era years in each calendar. We have an expanded range of selection for reference years; this comes into play most obviously with the lunisolar calendars, Chinese and Dangi. And we have the list of epoch years per calendar. And we have specifics for which calendars support eras. So this is stuff that's now been defined at the level of ECMAScript. We additionally added constrain behavior when adding years in lunisolar calendars, which can be complicated because these calendars often have leap months. We've defined an algorithm for determining the difference between two dates, and we have special handling for the lunisolar calendars, with an implementation-defined algorithm. + +BAN: So what's new since January? We've done a fair amount of cleanup and fixing of spec bugs, but no normative changes. I'm not going to walk through these in too much detail. But #118 involved situations where the reference year wasn't correctly determined in cases where we were dealing with a PlainMonthDay constructed from an object that has a year. Those were spec bugs, and fixes have been merged. We clarified the handling of days in the month in the calendar ISO date slots. We've clarified the proper authority for the Persian calendar.
We've given additional informational notes for calendars that are not maintained by a single organization. If a calendar is maintained by a single organization, we cite that; where there is no single authority for the calendar, we give informational notes saying so. And #122, as it says, fixes a spec bug resulting in assertion violations when a user provides a date with a month later than the number of months in that year in that calendar. And with all of these, the implementations were already doing the right thing. + +BAN: All of our blocking issues have been closed. All of the remaining issues on the repo are nice-to-haves for the future; we have no blocking issues, and we've been stable since the last plenary. So, just to step through the stage entrance criteria: we do have two compatible implementations which pass the test262 acceptance tests, V8 and SpiderMonkey, and we've had significant in-the-field experience with shipping implementations. Actually, instead of doing the bullet points, I'll do the slides with the text on them. So yeah, both SpiderMonkey and V8 have 100% or about 100% test conformance as of March 10th. Both of them have been shipping Temporal and the associated Intl Era/Month Code features for several months now. We have a PR up for the spec; it is incorporated into the stage 4 PR for Temporal, so they will hopefully be merged together. Editor sign-off: I'm one of the editors, and I sign off on it, whatever that means. Ujjwal has also signed off on it. And RGN has said there are no blocking issues, just some small concerns that we intend to address. So that has been very, very quick. I'd love to answer any questions, but barring questions, we are going to be asking for stage 4. + +DLM: Yeah. We support stage 4 and express our happiness to see this advance.
+ +BAN: I suppose I should formally ask for consensus for stage 4, instead of saying "if condition X has been met, we ask for consensus for stage 4". + +RGN: Yeah. As mentioned, the issues in the spec are minor. They'll be resolved very quickly. I support stage 4. + +SFC: I was wondering how we're doing with browser conformance at this point. I know that in Chrome we've been working on a lot of issues in getting into conformance. I know that, for example, with the Islamic calendars it would be harmful not to be in conformance, and there are other cases where the conformance issues are edge cases. I was just wondering where we're at in terms of engines doing what the spec tells them to do here. I'm asking only because we've found a lot of issues with the Chrome implementation that we've now mostly fixed, and I'm wondering where the other engines are in that process. + +A reply from Philip? + +PFC: I need to re-measure this today because these results are as of a couple of weeks ago, and there have been additional tests merged since then. But as of a couple of weeks ago, SpiderMonkey's conformance with the test suite was at 99.9% and V8's conformance was at 99.7%. + +BAN: And it's my understanding that most of the recent tests have been related to the East Asian calendars; there haven't been any additional tests added for the Islamic calendars. + +CDA: Oh, Ben, can you repeat the last thing that you said? + +BAN: Oh, yeah. Correct me if I'm wrong, but in terms of new tests that have been added since the last plenary, they've been for East Asian calendars rather than for the Islamic calendars. And Philip might know better than me about the implementations passing the Islamic calendar tests. + +CDA: It's still really difficult to hear you. + +BAN: Yes.
PFC might have better details on this than me, but most of the recent tests have been related to the East Asian lunisolar calendars, and the implementations have been conformant for the Islamic calendars for a while. Apologies if I'm mistaken on that. + +CDA: Okay. SFC is on the queue with "no need to speak": "Cool. Plus one on stage 4. Nice work, Ben et al." + +BAN: Oh, my goodness. Thank you all so much. + +CDA: All right. We had explicit support for stage 4. Congratulations. + +BAN: This is the culmination of many months of my life and many, many years of the lives of everyone working on Temporal. So I'm very excited. + +CDA: Excellent. Thank you so much. + +### Speaker's Summary of Key Points + +* Test coverage complete +* Two implementations exist +* Final editorial changes complete + +### Conclusion + +* Stage 4 achieved! + +## Meta: Proposals repo now lists all times a proposal was presented + +Presenter: Jordan Harband (JHD) + +JHD: Within the last couple of months, I pushed an update to the proposals repo that attempts to add an exhaustive list of meeting notes references to every proposal. This was done in a heavily automated fashion, and it almost certainly has gaps; some items have already been patched in to fill them. Please, for any proposals you have ever championed or are championing, it would be super awesome if you took a look and double-checked that all of the meeting notes references for the proposal are included. You don't even necessarily need to do the work to find them; just file an issue or DM me or something and say, "Hey, this proposal's missing some, I think", and I'll figure it out. But that would be great. And unfortunately, `
` elements don't render in GitHub for Markdown tables, so it has to be a big long list. I'm also open to suggestions for how to make that UI look a little nicer. But for now, this is sufficient, just for archival purposes. + +## Error Stack Accessor for Stage 2.7 + +Presenter: Jordan Harband (JHD) + +* [proposal](https://github.com/tc39/proposal-error-stack-accessor/) +* [issue](https://github.com/tc39/proposal-error-stack-accessor/issues/9) + +* Issue presented instead of slides + +JHD: Okay. So this issue is my "path to stage 4" thing, where I talk about all the different requirements as a proposal advances. The error stack accessor proposal, as you may recall (thank you, JRL), is basically trying to standardize the pre-existing browser behavior for `Error.prototype.stack`: just the accessor itself. There is a separate proposal for error stacks themselves, which is about the structure of the stacks. And there is a potential future proposal that somebody could write to specify the contents, if they wanted; but none currently exists, because that's a bit of ocean boiling. But iterative progress is still good. + +JHD: So JRL, if you'd scroll down a little so stage 2.7 is shown. Editors and reviewers have signed off on the spec. Various issues brought up have been addressed. And in theory, all browsers are already compatible with this, except perhaps V8, which has expressed interest in moving to the accessor in the past; but we can confirm that. The only thing that's not checked there is the HTML integration PRs. I made three; I just pushed them up this morning. So if somebody wants to block advancement because those haven't been approved, that's perfectly fine. Alternatively, conditional advancement would be great. Either way, the diffs on all three of those are small. I mean, the test one is just adding tests, so if they're incorrect, they just need to be fixed. And then the other two, it's pretty straightforward.
It's just patching into the structured clone algorithm and the other places that rely on error data and stack information. In other words, the intended semantics within HTML are already set; it's just that the exact wording has not been reviewed and approved by the various editors in WHATWG. So my hope is that today we can get full or conditional advancement to stage 2.7, so that this one more corner of the language becomes slightly more specified. And I don't see anyone on the queue yet. + +CZW: Mentioning the V8 stack behavior in Node.js: I think it's linked in the original stack proposal, about the compatibility. There is a history of V8 changes here, and Node.js is also affected. Right now, in the latest version of Node.js, because of a V8 change, `stack` is an own accessor property on the error instance. It's definitely not the behavior that this proposal is describing; it's not a prototype accessor. + +JHD: Yeah. Thank you for confirming and calling that out. My understanding of the history here is that it was made an own property not for any philosophical reason, but as part of an additional change, perhaps for performance. OFR is next on the queue, but hopefully V8 is still willing to move to this accessor model. All common usage of `.stack` on errors should work identically within V8 with this change. This proposal is compatible with the `Error.captureStackTrace` proposal, and I think `prepareStackTrace` in theory could work with it as well; I haven't looked into that too hard because there's no proposal for it. But certainly, if there's any foreseen breakage, we can work on that before advancing this to 2.7. My hope is that that's already been exhaustively looked at. So I'll defer to OFR. + +OFR: Yeah. Just to clarify: you said that this is basically speccing what V8 is doing already, and that's not true. It's not speccing what V8 is doing.
We do have an accessor, but the accessor works differently than the one that is described in the spec. Specifically, the way it differs is that our accessor allows you to write to the internal slot, whereas the one that is specced here does not write to the internal slot; instead, it adds an additional own property that shadows the internal slot. We are okay with trying this. There is a non-zero risk in that, because it could theoretically lead to more memory leaks: we're not able to free the internal slot when you invoke the setter to overwrite the stack. But we're okay with going forward. + +JHD: Awesome. + +OFR: But there is a risk in not speccing what we're currently doing. + +JHD: Cool. Thank you; that's noted. I think my instinct there is that you can free it when the error object is collected anyway, right? And the error paths usually aren't too performance-sensitive. But obviously, we'll have to see how it plays out. + +OFR: The risk is not about the performance. The risk is about the internal slot keeping alive a reference to the stack, and through that, keeping contexts reachable which are otherwise not reachable anymore, since we internally do not store the stringified version. + +JHD: Got it. + +OFR: I'm just saying it's a visible change from what we're doing right now. + +JHD: Cool. Thank you. If the setter as specced ends up being incompatible with V8, then obviously some changes will need to be made there, and hopefully we can figure that out within 2.7. + +MM: Yeah. So, Olivier, thank you for all that. I want to make sure I understand one of the implications. Independent of the setter behavior issue that you just raised, are you saying that V8 has no problem with, or is willing to, move the accessor up to `Error.prototype` and not have the own property accessor?
+ +OFR: Right. Yeah, good call. I think there's even a second change: we would have to move it to the prototype. Is it an own property right now? I'm not 100% sure, actually. But anyway, the point is that we're going to try to implement it as specced here. It's a visible change from what we're doing right now, yes. + +MM: Okay. So just one thing about the current implementation: it is an own property, but all of the own properties have the same getter function and the same setter function. So that's a reason to believe that the first step, which is just moving them up to the prototype, is probably not something that's going to break anything. There was a potential problem that was raised by somebody on V8, I think, which is that `captureStackTrace` in V8, not as proposed, also adds the internal slot and accessors to non-error objects. Does that remain a worry? + +OFR: I think that's orthogonal to this proposal. + +JHD: Yeah. `captureStackTrace` could choose to set the internal slot or not, and it would simply not have that option on non-error objects. + +MM: Okay. Great. Thank you. I'm done. + +KM: Just to clarify one thing. I think maybe I misheard you, but I think you said that all engines are using an accessor today, and that is not the case. In JSC, it is still an own data property. + +MM: No, I was making the claim specifically about V8. + +KM: Sorry, I meant the earlier part of the presentation, not your question, Mark. + +MM: Okay. + +KM: Not that that necessarily impacts the proposal; I don't necessarily think that's an issue. I just want to make sure that we're clear. + +JHD: Yeah. So thank you for clarifying that. JSC has an own data property, and no accessor at all on the prototype? + +KM: Yes, that's right. Sorry. + +JHD: Okay. But JSC is willing to make this change? + +KM: Yeah. We're willing to try it. I mean, again. + +JHD: Awesome. + +KM: Obviously, compatibility risks.
+ +JHD: Of course. + +DLM: Oh, sorry. I think you were just about to ask for stage 2.7. I support stage 2.7; I'd just ask that you get the HTML integration work done before stage 3. + +JHD: Absolutely. Thank you. + +CDA: And that's it for the queue. + +JHD: Okay. And Dan, was that support for full stage 2.7? No need for conditional? + +DLM: Yes. Following on from my comments earlier, I prefer to avoid conditional advancement; I don't think it's necessary for this proposal. + +JHD: All right. Thank you. + +CDA: MM supports, and RGN is a plus one for stage 2.7. I presume that the one spec editor sign-off is sufficient. + +JHD: Typically, it has been. I put them all on there just in case. But yeah, obviously, if an editorial issue is discovered, it will be tweaked. + +CDA: Yep. All right. Seeing nothing else on the queue, and no objections: you have 2.7. + +JHD: Thank you, everybody. + +### Speaker's Summary of Key Points + +* HTML integration PRs up; will be approved before asking for stage 3 +* Browsers are willing to make changes to align with the proposal, modulo web compatibility issues that arise + +### Conclusion + +* Advances to stage 2.7 + +## TypedArray concat for Stage 2 + +Presenter: James M Snell (JSL) + +* [proposal](https://github.com/tc39/proposal-typedarray-concat) +* [slides](https://github.com/tc39/proposal-typedarray-concat/blob/tc39-plenary-march-2026/slides-export.pdf) + +JSL: Okay. Just to recap: concat got to stage 1 in Tokyo, and I'm asking for stage 2 today. A quick update on where we're at. The problem: we have `set` now, but it basically requires you to reallocate everything every time you call it, or to walk through the items yourself. It works okay; performance benchmarks on the various implementations are okay. Ideally, though, we would have a concat operation that can be optimized a little bit more
than just this iterative `set`. So: we have something now; ideally, we would have something a little more optimizable. The solution, in its current state, asking for stage 2: there is draft spec text in the repo right now, with a pull request that refines it a little bit. `TypedArray.concat` is a static method: I pass in an array of items, and optionally a length. If the length is less than the total of all the items, you're truncating; if it's greater, you concat and then zero-fill the rest. You have an `ArrayBuffer` concat that's the same thing, and a `SharedArrayBuffer` one. So this is the current surface. One of the open items is whether concat is the right name; we'll get to that in a bit. This is all in the draft spec language already. + +JSL: For typed array concat, the items are either the same kind of typed array or an iterable that fits the shape. So this example is showing two `Uint8Array`s, but with the current proposal you could pass in any iterable of those things, and you'd get back a `Uint8Array`. Okay, `ArrayBuffer`: it's straightforward, you concat these and get back an `ArrayBuffer`. There is an options bag: you can specify that the result is resizable or immutable. Obviously, the immutable option depends on that proposal continuing to make progress. And then `SharedArrayBuffer`: this one I'm less convinced is strictly necessary; it's just there for completeness, and I haven't actually found a use case for it, so I could go either way on whether we keep it. Again, the options here are slightly different: `growable` is there, and obviously there's no immutable. + +JSL: Spec status: you can take a look at it. Like I said, there is an [open PR](https://github.com/tc39/proposal-typedarray-concat/pull/9) that I think I just opened yesterday.
It refines things based on some of the discussion that has happened and has a few new abstract operations; worth taking a look at. It's not trivial; there's a fair amount of spec text there. It is defined in fairly generic terms, basically in terms of `set`, or the same type of operations as `set`, but the intention is for implementers to use any technique to optimize it as much as possible. So the spec text is basically just there for completeness. + +JSL: Design decisions from stage 1. A static method rather than a prototype method: so we're using `Uint8Array.concat`, not `Uint8Array.prototype.concat`. An iterable argument rather than rest params: so we pass in an array rather than individual arguments. This goes to what is probably the most common use case, which is collecting these things and then concatenating; you don't want to have to spread them out after you have already collected them. Same-type enforcement: the items in a typed array concat have to be the same type of typed array, and if you do pass in an iterable, its values will be converted to that type. Separate `ArrayBuffer`/`SharedArrayBuffer` concat. An options bag: yes, we're going to talk about that. An immutable option: yes. And the iterable being fully consumed before validation: it accepts an iterable, so do we fully consume that, or do we consume it while we're copying? That has an impact on observability, error handling, that kind of thing. So that's one of the things we need to discuss. + +JSL: Open issues. The name bikeshed: concat has the unfortunate quality that the shape of this method is different from the `concat` that we have in, I think, String. It conflicts with Node's use of concat: `Buffer` has a concat operation with a different argument shape. So we probably don't want to go with concat, even though that is the most obvious name for this.
So basically, I'm open for suggestions on other names; `fromChunks` was one, and there's `join`. There are things here that we can discuss. I do want to note DLM is on the queue objecting to the current problem statement; I'll get to that in just a moment. + +JSL: I just want to briefly go through the issues, and then we'll go to the queue. The iterable consumption ordering: I think this is probably the biggest technical one that we need to get through; it has to do with what's observable, and when, as we go through this. Accepting iterables as items: I don't think this one is controversial, but I want to make sure we discuss it. Immutable ArrayBuffer integration: again, I don't think this one is controversial either, but since Immutable ArrayBuffer is still pending and not finalized, we just want to make sure we consider it. And then there are a few other minor discussions, about things like what the backing buffer looks like for partial-view typed arrays; again, I don't think these particular ones are controversial, but we can discuss them. Okay, let's go to the queue. DLM, you're up first. + +DLM: Thank you. So yes, we have some deep concerns about the current problem statement. I'll just read it quickly: "ECMAScript should provide native methods for concatenating typed arrays and array buffers to enable implementations to optimize strategies that can avoid the current requirement of eagerly allocating and copying data to new buffers." From our point of view, the only realistic way to implement this would be to convert typed arrays and array buffers into rope-like data structures, and we have a fundamental opposition to doing this. We think they should remain linear. We think that ropes for strings are a bit of a can of worms in terms of optimization, and we don't want to open that can of worms for typed arrays or array buffers.
So we object to the current problem statement, because we don't see any other way of optimizing this other than ropes, which we are opposed to. And this is a blocking concern. To be clear, I have no problem with adding concat as a developer-ergonomics kind of thing, but that's not what the problem statement currently states. + +JSL: This was raised at the initial presentation in Tokyo. I'm no longer thinking of it that way, and I don't think ropes or anything like that are on the table at this point. So I think the only to-do here is maybe a refinement of the problem statement; we're not heading that way. + +DLM: So I won't support stage 2; actually, I'll block stage 2 with the current problem statement. But I'm quite open to you refining that problem statement. It should be done before this goes to stage 2. + +JSL: Got it. + +KM: Yeah, reiterating the same point. I don't think ropes would ever really happen for typed arrays. It would be really difficult to do because they're mutable; strings are at least immutable, which makes ropes a lot easier. And ropes have all the same kind of weird performance pitfalls that you don't expect when you're dealing with something as low-level as a typed array. I would very much expect all of my operations to be O(1) rather than O(log N) to access a random element in my typed array; anything else would be quite surprising from a development standpoint. + +NRO: About the naming: we do have precedent for a static method, which is `Iterator.concat`. It has a different signature, because it takes multiple arguments, one per iterator to concatenate, rather than an array of iterators, so it also conflicts there. I still think it would be fine to call this concat; it's a different enough domain. But take that into account. + +JSL: Okay. All right.
So I want to get back to the point on the problem statement, and just ask a question: I can take another stab at a refined problem statement. So instead of asking for stage 2 right now, if I have that revised and if there's time, can I bring it back tomorrow or the next day to revisit? + +CDA: Yeah, if there's time. + +MF: Yeah, just on the naming bikeshed. To NRO's point, because of that precedent with `Iterator.concat`, and because other methods that have the word "from" in them imply that we're pulling things from a container of some sort, I think `concatFrom` would be a good name choice here. But I don't want to spend time on bikeshedding names. + +JSL: Yeah, we can bikeshed names either in the repo or in the discussion; if you have some ideas, let's get some options, and I'll put together a poll or something to see what folks prefer. + +JSL: The iterable consumption ordering: this is the one that I think is the larger issue that we need to settle. JHD, was this the one that you commented on in the issue? + +JHD: If I'm remembering correctly, it was essentially that the way you've specified it, it iterates each item, and if there's an error, it stops in the middle. The way we typically do built-in methods is to consume all of the arguments first and store them in spec-internal data structures, and then process them separately, so that the validation is all done up front. In other words, the observable hooks into user code would all be completed by the time validation is done, including closing iterators and everything. And then, in theory, there might not be any user code or observability for the processing step, which allows for determinism, optimizations, etc. + +JSL: I'm looking at this here. KG, you also had some ideas on consuming the iterable.
Do you have any thoughts on this right now? I don't mean to put you on the spot. + +CDA: KG, if you're there, you're on mute. I do know that KG is on childcare duty. + +JSL: Okay. I'm going to read KG's comment here: "The iteration protocol calls user code, which could detach or resize a typed array from earlier in the iterable. Reading the whole iterable and only then doing ValidateTypedArray on the values it produces ensures there's no user code running between the point at which you start computing the total length and the point at which you do the copies, which might be faster or less bug-prone for implementations." Here, I really don't have very strong opinions; I'm happy to go with whatever the committee thinks is the best, or most idiomatic, approach. What I am concerned about, though, is observability of the process: I want to have a point at which the copy is not observable, and by that point, all the validation should be done. So what state does the iterable need to be in by that point? That's kind of where I'm at. + +JSL: Let me see, anything else on the queue? Nothing else on the queue. So it sounds like right now we're blocked for stage 2 until the problem statement is refined. Give me a day or two to try to refine that based on the feedback. What I would ask is: if there's time on the final day, I'll bring it back with an ask, if we have a good problem statement by then. + +CDA: Thank you. Okay. We're doing good on time.
+ +### Speaker's Summary of Key Points + +* We have draft spec text and have addressed a number of discussion points already +* We need to figure out the naming +* We need to figure out the iterable consumption timing issues +* Asking for stage 2 + +### Conclusion + +* Objections to the current wording of the problem statement +* Will attempt to refine the problem statement and come back on Day 3 for the Stage 2 ask + +## RegExp Buffer Boundaries for Stage 2.7 + +Presenter: Ron Buckton (RBN) + +* [proposal](https://github.com/tc39/proposal-regexp-buffer-boundaries) +* [slides](https://1drv.ms/p/c/934f1675ed4c1638/IQCTgpGYlj_dTblwh8sK7M9HAQRbd5eUz4b_XF4t1rxe_38?e=TDhJ9C) + +RBN: This is RegExp Buffer Boundaries, potentially for stage 2.7, although I've heard there's some chance that might not happen this plenary session. The purpose of the proposal is to provide a concise syntax to match the start or end of the input string to a regular expression, regardless of whether you're in multiline mode. This provides the ability to match the start of input, the start of line, the end of input, and the end of line in one regular expression, and to mix and match between them. The other purpose is to improve portability and reuse of common regular expressions across languages, such as in TextMate grammars, editors, and build tools, which commonly include them in JSON and YAML files. + +RBN: This proposal is similar to the caret and dollar anchors, which match the beginning or end of the input outside of multiline mode, or the beginning or end of a line inside multiline mode. The idea is you have a slash capital A (`\A`) zero-width assertion that only ever matches the start of the input and a slash lowercase z (`\z`) zero-width assertion that always matches the end of the input. It's important to note that these assertions require some type of Unicode mode, so either the `u` or `v` flag is necessary.
This is because JavaScript's default mode for regular expressions treats these as literal escapes, per Annex B. However, in any Unicode mode these escapes are currently errors, because Unicode mode explicitly restricts escapes to a set of allowed escapes. `\A` and `\z` would not be supported in character classes. + +RBN: There is prior art for this. It is in, if not all engines, at least every regular expression engine that I have checked (at least 8, at this count), except for JavaScript. I have a GitHub repo and a website that has a list of comparisons of various regular expression features across numerous languages and runtimes. Now that we have advanced the RegExp Modifiers proposal, the semantics of this proposal are already built into the language. So now the main purpose of this proposal is to introduce syntactic sugar over modifiers to match the same syntax used by other regular expression engines for the same capability. The best reason for this is to ensure that these regular expressions are portable across languages when they're used in offline storage formats like JSON and YAML files. This also improves the ability to leverage regular expressions in documentation from other languages and examples online, as well as code produced by LLMs. So there's a greater likelihood that you'll be able to reuse a regular expression you might see in something like Perl, or PCRE, or any of the other engines. + +RBN: The spec text for this is dead simple and literally fits on a single slide: it is just the syntax to support `\A` and `\z` in the regular expression grammar and in Annex B. Here are some examples with and without support for buffer boundaries, using just dollar and caret. You can see in the first example that it matches the entire string, yet it fails to match in the second case because the entire string does not fall within those boundaries.
For the second example, you can see that it does match in the second case because it's looking at the beginning and end of a line, rather than the entire input. Again, this is achievable with modifiers, as you can see here, but it's a somewhat esoteric set of symbols to make this work. With buffer boundaries, it's simply a matter of using the slash capital A (`\A`) and slash lowercase Z (`\z`) in those cases, regardless of whether you are in or out of multiline mode, guaranteeing the same match. + +RBN: And showing that this is possible, you can also mix buffer boundaries with caret and dollar, so you can look for something that is at the beginning of the input, somewhere in the middle of the input on a line, and at the end of the input, and match these by mixing them. + +RBN: One note is that, at the time we advanced this proposal to Stage 2, we decided to drop the slash uppercase Z (`\Z`) syntax. The uppercase Z syntax was designed to match the end of an input, including a trailing newline, which is often the case in text documents and source code. This is still potentially useful as a lot of linters (for JS and otherwise, and often written in JS) will yell at you if you don't end your file with a newline. This is also supported by most major regular expression engines in addition to slash capital A and slash lowercase Z. I'm not proposing that we add this yet, but we may want to consider adding it as additional syntactic sugar since it has been used frequently in things like TextMate grammars that are looking for the end of the document, and which are frequently consumed by tools written in JS. + +RBN: The current status of this proposal: We have links to the explainer and specification text here. Our designated reviewers are RGN and WH. I haven't seen a review from WH yet. I did get a review from RGN, thank you. In addition, CDA was kind enough to also review, and right now it's currently at Stage 2.
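The anchoring difference RBN's examples illustrate can be reproduced with today's `^`/`$` semantics (plain JS; `\A` and `\z` themselves are not yet available in any JavaScript engine):

```javascript
const text = "alpha\nbeta";

// Without the m flag, ^ and $ anchor only to the start/end of the whole input.
console.log(/^beta$/.test(text));  // false — "beta" is not the whole input
// With the m flag, ^ and $ anchor to line boundaries instead...
console.log(/^beta$/m.test(text)); // true — "beta" is a complete line
// ...so a multiline pattern has no way to also anchor to the whole input.
// The proposed \A and \z would always mean input-start/input-end,
// regardless of the m flag, letting one pattern mix both kinds of anchors.
```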
I am seeking stage 2.7, potentially 3, although I know that's also a stretch. I do have a pull request up for test262, which has tests for these cases. I also have a PR with an implementation for engine262. I need to make one change to add feature flag support, but otherwise, it's fully implemented. So that's where things currently stand. + +DLM: Yeah, sorry. Unfortunately, I have to block this based upon the late addition to the schedule. This likely is fine, but I would prefer to have the time to ask someone on my team that has more expertise in regular expressions to have a look as well. And there just was no time for that. So I'd encourage you to bring this back. + +RBN: My hope here was that since this is essentially syntactic sugar, there should be little concern. I do look forward to that feedback. If there is a chance that we might hear back on that before the end of plenary, please let me know. If so, I'll bring this back at that time for that discussion. If not, I will be bringing this back in the next plenary session. These characters are already reserved in any Unicode mode that they would be supported in and are otherwise not currently allowed, so this doesn't step on any other syntactic space. I will also state that there is no situation in which I think we would advance anything that gives slash capital A and slash lowercase Z any other meaning in JavaScript regular expressions, as it would buck the status quo in every other regular expression engine for no reasonable benefit that I could see, especially when regular expressions are so commonly shared between tools. I also will note that I have had some feedback from the implementer of Oniguruma-es, who has stated that they currently support this. This is a feature that's often used by a number of editors which use native bindings for Oniguruma for their grammar support, including VS Code, but the original Oniguruma is now end of life and is no longer being maintained.
I'd like to bring more power to JavaScript so that these regular expressions can be not only portable, but actually consumable by JavaScript without having to go through heavy transformation or rewriting via something like Oniguruma-es. I hope that we'll be able to get this in the near future. + +OFR: Yeah, I would just like to double down on what DLM said. I think the deadline for adding things to the agenda is already a bit tight for us sometimes, so I think we should not stretch it even more. And if somebody says it was too late for them, then we should not put more pressure on them to hurry something. + +RBN: I appreciate that feedback. I added this to the agenda late per the recommendation of several others, hoping that, given the size and simplicity of the proposal, there might be a chance that we could get this through. I'm perfectly fine bringing this back at the end of the plenary if those blocking have time to review it; otherwise, I will bring this back in May. + +WH: Looks good to me as a reviewer. + +CDA: Great. Thank you. That's it for the queue. + +RBN: Yeah, thank you. Since it doesn't seem like we'll be able to advance at this time, I appreciate the feedback. I will write up a quick summary after this. + +### Speaker's Summary of Key Points + +* Now proposes `\A` and `\z` as syntactic sugar over modifiers +* Spec approved by Richard Gibson, Waldemar Horwat, Chris de Almeida, Michael Ficarra +* Test262 tests written +* Engine262 implementation PR submitted +* Proposed for Stage 2.7 + +### Conclusion + +* Blocked from advancement due to insufficient time to review + +## Amount for Stage 2 + +Presenter: Ben Allen (BAN) + +* [slides](https://docs.google.com/presentation/d/1SAD0bgB_SOWrtz1YZuySC4YHdMxBs0vTnS7buQa311c/edit?slide=id.g3cb9b164150_1_50#slide=id.g3cb9b164150_1_50) +* [proposal](https://github.com/tc39/proposal-amount) + +BAN: Okay, let me share my screen. All right. Looks like slides are visible. Okay.
So we are going to be asking—with an asterisk—for Stage 2 for Amount. I just want to thank EAO for all of his work on this, and also my colleague JMN, who has been doing most of the recent spec implementation. I've had to largely step out of this one for a month, but I'm returning to it now. Okay. Next slide, please. All right. So this is different from how it was the last time we presented Amount at plenary. Previously, Amount stored mathematical values. In the new version, they can actually store either numbers or bigints or numerical strings, ultimately tagged with the unit, which is what Amounts are about. Amounts indicate that a given numerical value is an amount of some certain thing. So, some recent changes: first of all, we received feedback that this proposal would be stronger if we put back in the unit conversion options that we took out two or three plenaries ago. So unit conversion has been re-included in the proposal, and we can convert meters to centimeters, inches, and so forth. The units that we're supporting and the conversion data are coming from CLDR's unit.xml, which is treated as a new normative reference. When we do those conversions, we're going to be doing them with number operations, with the final result getting rounded with the same options and defaults as used by `Intl.NumberFormat`. I believe the default is minimum fraction digits zero and maximum three. We'll additionally extend support for conversions targeting a locale and usage, as in the old smart units proposal, beyond what we're proposing for ECMA-262, which is conversion to an explicit unit. The API has been dramatically simplified. So the type of the input value is going to be retained, as I started to say: if it's made using a bigint, it's going to be stored as a bigint. If the input is a number, it's going to be stored as a number. If it's a decimal string, it's going to be stored as a decimal string.
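The input-type retention BAN describes could be sketched roughly as follows. This is a hypothetical stand-in, not the proposal's spec text: the name `AmountSketch`, its constructor arguments, and the bracketed `toString` format are illustrative assumptions.

```javascript
// Illustrative only: a minimal stand-in for the described Amount behavior,
// showing that the value keeps the type it was constructed with.
class AmountSketch {
  #value;
  #unit;
  constructor(value, unit) {
    const t = typeof value;
    if (t !== "number" && t !== "bigint" && t !== "string") {
      throw new TypeError("value must be a number, bigint, or decimal string");
    }
    this.#value = value; // stored as given; no coercion between types
    this.#unit = unit;
  }
  get value() { return this.#value; } // same type that went in
  get unit() { return this.#unit; }
  toString() {
    // Bracketed unit suffix so the result cannot parse as a plain number.
    return `${this.#value}[${this.#unit ?? ""}]`;
  }
}
```

With this sketch, `new AmountSketch(10n, "meter").value` is still a bigint, and `new AmountSketch("0.100", "liter").value` keeps its trailing zeros.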
And we're going to have a new value accessor, which returns a number, bigint, or string depending on what was provided as input. The next thing on here we're going to talk about in the open questions, but we're proposing that if the input is a string, what will be exposed through value is that value in exponential notation. The reason for that is to avoid ambiguities with significant digits—the standard gotcha question about significant digits: okay, how many significant digits are there in 100? So we're dropping the fraction digits and significant digits accessors, since they duplicate information that we're able to get out via value. Additionally, the `.with()` method is being dropped, and the options for toString are dropped. So the output always includes brackets: empty brackets if there is no unit—no thing that this is representing a number of—or, in brackets, what that thing is. So if it were kilograms: in brackets, kg. So the output cannot be parsed as a number. And I want to, again, really thank JMN for all his work on this. He's done very, very extensive work to update the spec text to match the new design we prepared. So now the value internal slot, which in previous versions stored the mathematical value, now stores a disjunction of number, bigint, or decimal string. + +BAN: And something key is that if precision options are set, so fraction digits or significant digits, the value slot is going to store a decimal digit string; otherwise, it retains the original type. Additionally, because we're re-including unit conversion in this, there's a new convertTo method, which will be in ECMA-262; and in ECMA-402, locale- and usage-based convertTo. So for example, as in the old smart units proposal, in a lot of locales, the measuring system that you use changes depending on what thing you're measuring.
So, even in locales that use meters and centimeters, a person's height could be represented in feet and inches. Canada is a specific example of this sort of thing. So 402 adds the option to provide a locale and a usage to convertTo, to convert to something that makes sense for that usage in that locale. In spec terms, we previously had a lot of custom arithmetic operations. All of those have been removed; instead, we're relying on 402's FormatNumericToString. Which means that if this proposal advances to Stage 4, a lot of the algorithms of 402 would then get moved into 262, and presumably renamed. Something else about this: it's going to be stacked on top of the keep-trailing-zeros proposal. The idea is essentially to simplify Amount as much as possible. + +BAN: And so here's the API. Something we will return to is: what is the best way for toString to work when what's stored in the Amount is a decimal string. Again, we'll come back to this later on. + +JHD: Do you want a question now or at the end? + +BAN: Let's hold off for a second. But a lot of this comes down to one of the big open questions; this is right at the end of the question session. Let's see. So here are some example usages. This would convert to pounds; that would be part of 262. So in this usage, where the unit option is provided to convertTo as pound, we'll, even in 262, convert from kilograms to pounds. What conversions are possible? Well, the data that we're using is in CLDR. And then here is an example with usage. That's stuff that 402 knows about. So if we have an Amount using kilogram as its unit, and we say, okay, we want this in the en-US locale, as used to measure a person, well, then we'll get pounds out. + +BAN: Okay. So there are a number of open questions. To recap the updates that we've made: one, we've put conversion back in.
And two, we've simplified the proposal a fair amount by not having our own definitions of certain arithmetic operations in our spec. + +BAN: So we have some big open questions with this. All right. One of the smaller open questions is: okay, when we do unit conversion, we are going to apply rounding to the output using NumberFormat digit options. So those are our defaults ({ minimumFractionDigits: 0, maximumFractionDigits: 3 }). Would these be appropriate defaults? Something that I skipped that was pretty important—I'm going to flip back some slides. + +BAN: So when we call convertTo, the amount that comes back—well, first of all, the arithmetic operations are going to happen in number space, and what we get back is going to be a decimal string. Likewise, if we provided the precision options, so fraction digits and significant digits, then regardless of the input type, the new amount is going to be storing a decimal string in order to preserve that information. So first of all, okay, what are our default rounding options going to be? + +BAN: Unit conversion is only going to use the default locale when the usage option is set and the locale option is not set or is undefined. Is this the appropriate behavior, or would some other behavior be better? And okay. + +BAN: This is maybe the biggest open question. It's going to be beneficial, largely for purposes of interchange, if the string representation of the value of an Amount is well-defined and unambiguously retains its precision. Further, though, we can use types other than strings to represent values, which would be useful for non-finite values.
So the idea is, okay, when we have an Amount storing its value as a decimal string, let's use the same string representation that we already produce from `Number.prototype.toExponential` for finite values, and from Number for the non-finite values. So if the value is stored as a string, toString or whatever accessor is going to produce an exponential representation of it. This avoids ambiguity and avoids losing information about the number of significant digits. Again, it's the standard gotcha with significant digits: how many significant digits are there in 100? Well, that problem goes away if we use exponential notation. + +BAN: So I want to open for questions at this point. And for some of the questions, I will shamelessly defer to JMN, because he's been the one who's been deepest in the spec, and to EAO as well. + +JHD: Yeah. I was surprised when you said that when you specify a precision, the original value is stored as a string. It makes it very obvious why toString would give you a string with the precision applied. But if you go back to the slide that has all the types on it, the whole interface or whatever—here we go. So first of all, presumably, you'd want a way to extract what the precision was, right? I don't see any way here for me to see what the options were. But regardless, I would expect value to give me the original value and not apply the precision rules; that's what toString would do. Regardless of whether that expectation is shared by others or not, it seems important to me to have a way to get at the original value even if you provide precision. So what's the thinking around that? + +BAN: And this is where I shamelessly defer to JMN and EAO, because several of the conversations involved in this were ones that I was not in.
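The significant-digits ambiguity BAN mentions can be seen with `Number.prototype.toExponential`, where precision is explicit in the string, and with the fact that a plain number cannot carry trailing zeros:

```javascript
// In plain decimal notation, the precision of 100 is ambiguous:
// one, two, or three significant digits?
console.log((100).toExponential(0)); // "1e+2"    — one significant digit
console.log((100).toExponential(2)); // "1.00e+2" — three significant digits

// A Number cannot retain trailing zeros, but a decimal string can:
console.log(Number("0.100") === 0.1); // true — the "0.100" precision is lost
```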
+ +EAO: We discussed this actually in committee in TG1 previously: what to do about rounding. And we ended up getting instruction from this committee that rounding should happen on the way in, so that rounding is applied to the value that is given to us, for what we retain internally. We have not identified any use case for an amount that does require rounding during construction that would then also require some access to the unrounded original value. + +JHD: So that's fair. I don't care about seeing the pre-rounded value, but I would expect the type of value—like `.value` or whatever—to be number or bigint, is what I'm getting at. So why couldn't it be a post-rounded number or bigint? I still don't understand why it would need to be a string. + +EAO: We need to retain information about, for example, trailing zeros. So if we get an input of 0.1 and we are asked to create an amount with fraction digits 3, we need to be able to represent that as 0.100. And number and bigint don't support that sort of a representation. + +JHD: Of course. But that would all be an internal slot; the external interface isn't how that would be stored or fetched. So you can store whatever you want. I'm talking about the external interface. + +EAO: Which is the `.value` accessor. + +JHD: Right. And I don't understand why that should be a string. When toString gives me the string with everything applied, I would still expect `.value` to give me the original type, with rounding applied, perhaps. + +EAO: But the toString value is not supposed to be the thing you consume. This is also meant to match the Intl unit protocol, to be discussed later in this meeting, for how something like `Intl.NumberFormat` reads the input value that it is given. And that does require access to something like a `.value` that knows if there are any trailing zeros there. + +JHD: Okay. So that's fine.
I'm not attached to the spelling value for what I'm describing, but I would like to be able to get the original value without precision applied somehow. And similarly, to be able to introspect what the options were. + +EAO: If you could file an issue in the proposal presenting a use case for this, it would be really great to be able to take that into consideration. + +SFC: Yeah. Just a little bit to add on here. When you specify a number of significant digits, or if you specify the precision, we need to convert the number to a decimal-like representation in order to perform that operation, because we do rounding in decimal space, not in binary space. The expensive thing is converting from binary to decimal—and when I say binary and decimal, I mean the space of numbers that are binary and the space of numbers that are decimal. So the current incarnation of the proposal says that we only do that conversion when we need to, and once we do, we stay in the space that we needed to go to. Since applying a number of significant digits is an operation that happens in decimal space, we convert to decimal space, perform the operation, and then stay in decimal space. + +JMN: I just wanted to add that the value accessor can return a string, but that's not necessarily the full thing. So toString is supposed to be the full thing, like 1.5 kilograms, whereas value will be 1.5. And I'm not sure if that's quite what you're saying. + +WH: I reviewed this proposal before the meeting. I spent several hours reviewing it, only to find out three hours before the meeting started that the spec that was posted was not what was being proposed. The posted spec was dated February 2026, but it's not this proposal. The actual spec was posted three hours before the meeting, which is not enough time to review it. So I don't think it's appropriate to ask for a stage advancement with a late spec.
+ +BAN: Yeah. Apologies for that. That's our bad. There was a CI problem, basically: the spec rendered in the PR didn't render when we merged to master. Which is very, very unfortunate. And I absolutely understand that that's, in and of itself, a reason not to advance this. + +WH: Okay. So over the last couple of hours I've been glancing at the new spec, and I have some observations which concern me. My main concern is standardizing something which produces incorrect results. Now, it's often okay if we standardize something which doesn’t do everything, but whatever it does, it should do correctly. And there are a couple of areas where this will give incorrect results. What I don't want to end up with is a case like we had way back at the beginning with \[Date\] `getYear` and `getFullYear`, where the first thing added to the language was `getYear`, but that was producing incorrect results. So then we had to drill into everybody to use `getFullYear` instead of `getYear`, which we couldn’t change after it was already in the language. And the two places where this proposal gives incorrect answers that I can see are: + +WH: 1. When you give it 2 meters and ask for a person-height in the US, what do you get? You get 79 inches, which is not what you want. This is numerically fine, but I'd say that from a localization point of view, that's an incorrect result, with the expected result being something like 6’7”. And once we standardize it, it will be very hard to fix later. The only way to fix it will be to have users set up a flag saying, give me the actual correct thing rather than what we standardized earlier. + +WH: 2. The other area is the combination of unit conversion and rounding. I just did some quick math, and, the way it's currently implemented, if you give it 7 feet and ask it to convert to inches with a rounding mode that truncates towards zero, the answer you will get is 83, even though converting 7 feet to inches should be an exact conversion.
But it will give you 83 instead of 84. I would be hesitant to standardize something which gets exact conversions wrong. + +BAN: Okay. With the first issue you raised, what I'm hearing is that the locale-related conversion is dependent on having compound units, so that we get feet and inches in the case of the US. Is that correct? + +WH: Well, I am agnostic on how we do it, but the situation I don't want to get into is for us to standardize something and then tell users that to get the actual result that they want, they need to set another flag to say that they want the good answer, not the one that we standardized first. + +BAN: That's very useful feedback. Are you going to raise these as issues? + +WH: Okay. + +SFC: WH, you said something about 83 versus 84, where one is the correct answer and one is the answer you get. Can you elaborate a little bit on the case that causes that issue? + +WH: Yes. It's actually already explained in a note in the spec that was posted. It's triggered by multiple sets of rounding while doing the conversions. And this is not a problem limited to double arithmetic—the same thing would happen if we were using decimal arithmetic. The inherent problem is multiple rounding. So if you take 7 feet, convert it to meters by multiplying it by 0.3048, and then divide it by (0.3048 divided by 12), you get 83.99999999999999. And if a user sets the rounding mode to truncate, that will display as 83. + +SFC: I'm glad you raised this, WH. This is a good question, but I think it would be good to get feedback from the committee, because the version of this proposal from last summer had this type of arithmetic done in mathematical value space, which means you wouldn't have these types of issues. We got feedback from committee that we don't want to start implicitly adding decimal arithmetic via unit conversion.
So in the current version of this proposal we do the math in number space, and we have all the same problems that numbers have, like this one. So if this is the direction that the committee wants us to go, then we end up getting results kind of like this. So yeah, I agree with you in the sense that I think it would make more sense for users if they don't have number problems when they use amounts. But also, number is the thing that is already in the language, and if we have a decimal type, the Amount type will be able to contain a decimal, and in that world these problems won't happen. So Amount will automatically adapt itself to a decimal type if and when a decimal type gets added to the language. But in the current state, since we just have a number type in the language, that's how the math gets performed. + +WH: A bunch of what you just stated is incorrect. The same problems would occur if we were using decimal. The problem is not double versus decimal; the problem is multiple rounding. The solution is to do the arithmetic—it doesn't matter whether you use double or decimal—but with only one rounding step. + +EAO: So noting that in the design of the API as it currently is, we ended up not being able to discover any use cases for unit conversion that would not be satisfied by the precision given by doing the operations using numbers. If you are aware of specific use cases for unit conversion that do require precision beyond number, that are not currently supported by what we're presenting here, please do file an issue so that we can make sure that that is accounted for. But we honestly were not able to come up with anything realistic that needed precision that was not accounted for by what we're providing here. + +WH: I don't understand your comment. Nobody needs 17 digits of precision. That's not the issue.
The issue is getting incorrect results, which are incorrect in the first or second significant digits if a user sets a rounding mode of, for example, truncate. Why is choosing a rounding mode even an option here? + +EAO: That is an entirely different discussion; we can consider, for example, limiting the rounding modes or removing the rounding modes that we make available here, which is a very valid way of solving this. + +WH: Yeah. + +EAO: But what we would really appreciate is getting the specific issue filed in the repository so that we can be sure it becomes accounted for in our considerations. + +SFC: Yeah. Responding to another thing WH said a little earlier about person height giving 79 inches: the CLDR usage for that should return the unit foot-and-inch. I think that we had considered for a little while splitting these out—we call these mixed units in CLDR—into a separate proposal. We decided just to keep it in the same proposal. But we have a path to solve this particular problem, where person height is actually measured in two different units. And it's not just person height in the US; in some countries it's meter and centimeter. And there are lots of cases where you have mixed units: degree, arcminute, arcsecond. There are many, many. So CLDR does account for this problem, and I would expect that solution to also be implemented here. + +JMN: I'm just following up a bit on what SFC and EAO have said. In the biweekly call that we have, which is often focused on Amount—it's in the reflector; you're welcome to join—we've also discussed things like having some kind of rational arithmetic, which I think is ultimately what we're talking about here, to deal with cases where we know we have an exact answer that is very simply representable. But if we do this all in number space in a naive way, then we get the wrong answer.
We can even imagine something like—I hate to mention it—a kind of computer algebra system, or a very baby computer algebra system here, which would actually address some of these concerns, which arise from things like multiplying by one number, dividing by another, and then multiplying by that original factor—exactness that usually gets lost in number arithmetic. I think we're open to discussing these things, but settling on number is, we think, a reasonable compromise. Certainly, it keeps the implementation relatively simple. But if you'd like to file an issue and continue the discussion, we're happy to follow through on that. + +JRL: Okay. So I have two. First, can you go to the next slide, which I believe is the one with the minimum and maximum fraction digits default values? Okay. This one, yes. I think having a maximum fraction digits here is a mistake. This is a concern for formatting of the final output, not for the conversion while you are doing multiple conversions. This might be obviated if we follow Waldemar's suggestion where we don't have rounding modes during the conversion itself; this would be like a final step, maybe, where you specify how many maximum or minimum digits you want. The second concern I have is with the increasing of precision. So can you go back to the type definition slide? Yes. This one. During this conversion, I'm assuming all of our conversions themselves are effectively infinite precision—like conversion from feet to inches or from meters to centimeters; we should be able to do that precisely without any form of rounding. But when you convert from one meter to centimeters, that does not mean you have a precision of 1,000 or a million or anything like that, just because you multiplied by an infinite-precision factor. Having the ability to increase the precision of your input amount is a mistake, I believe.
If you set a minimum fraction digits or minimum significant digits for a conversion that is not supported by your input, I think that is an error case. Either it should throw, or it should not increase the precision and just truncate to whatever your lowest-precision amount is. + +SFC: So I was talking with JRL a little bit about this over lunch. One possible approach we could consider taking, which is very much compatible with the current shape of the proposal, would be to say that if you start with a number-valued amount and you do convertTo, you get back a number-valued amount; and if you start with a string-valued amount, you get back a string-valued amount. And then we just drop the minimum/maximum fraction and significant digits from the convert options. I don't know if the champions have considered that direction yet, but that would resolve that issue. And it would also maybe resolve Waldemar's point about double rounding, because then there wouldn’t be any rounding in the conversion phase. + +WH: This is a little more complicated than it seems. 0 °C has what precision? How many significant digits? When you convert it to Kelvin, you get 273.15. What should that look like? + +JRL: Sorry. WH, is that to me or to SFC? + +WH: That was to JRL. + +JRL: Okay. So for the Kelvin example, because it is a fraction: if you start from zero degrees Celsius and you don't specify the additional 0.00—two or three sig figs of precision—and then convert it to Kelvin, I think the correct answer is 273 and not 273.15, because you don't have the input precision to keep the output precision correct. You don't know if you're at exactly zero degrees or 0.1 degrees Celsius. + +WH: Okay. I think you'd find a wide range of opinions if you ask different folks about this case. No matter what you do, some users will be surprised here.
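The double-rounding hazard WH described earlier can be reproduced directly in today's number arithmetic; the constants below are the exact foot-to-meter (0.3048) and derived meter-to-inch factors from his example:

```javascript
// Convert 7 feet to inches via meters, with a double rounding at each step.
const feet = 7;
const meters = feet * 0.3048;          // feet → meters (rounded to a double)
const inches = meters / (0.3048 / 12); // meters → inches (rounded again)

// Mathematically this is exactly 84, but the intermediate roundings leave
// the result just below 84 (per WH's calculation: 83.99999999999999),
// so a truncating rounding mode "loses" an inch.
console.log(inches === 84);       // false
console.log(Math.trunc(inches));  // 83
```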
+ +EAO: In the API that we are proposing here, when you do conversion, the precision of the amount from which you are converting is effectively lost as information. The way we are proposing to do the conversion is that we take the value of the amount that we start with and we make sure that that becomes a number. Then we do number operations on that and we end up with a number. And because we've done these multiplications and divisions with number precision, it is highly likely that cases like converting feet to inches, or inches to feet, or other conversions end up applying operations that give you a very nearly exact value for what the mathematical operations would give you. We need to redefine the precision as a part of the convert options. This is why the convert options have those minimum/maximum fraction and significant digits, matching the constructor's fraction digits and significant digits; we need to have this range here because, unlike with the constructor, we don't know exactly what value we started with, because we are applying that conversion to it. And we do not want to presume here an understanding of how to map, for example, a precision in Celsius to a precision in Kelvin, or a precision in inches to a precision in meters. We require the user to define what precision they want out of the converted value, effectively. + +JRL: And WH, you agree? + +WH: Yes, I agree with EAO. To give another example, if you convert 1.25 miles to meters, you should not get the answer as 2011.68 meters with two fraction digits just because the original had two fraction digits. + +JRL: So even if you were to do that conversion from miles into meters or centimeters, the precision that you had on the input—would it be the same as the precision on the output? Sorry.
+ +WH: I support EAO's position that the conversion should be as exact as possible and then the precision should be defined by the minimum and maximum fraction digits or whatever we do for the final presentation. + +JRL: Okay. I definitely support the conversion step here being effectively infinite precision. Whatever that number conversion stuff is, that should be done as precisely as we can possibly do it. But the input amount has a set precision. And what EAO had said is that we lose that set precision because of the conversion. And I disagree with that. You still have the amount; that amount had a set precision at construction time. I think the output amount should carry forward that same precision, or lower if you want that. But just because you did a conversion with an infinite-precision factor, that does not mean you can increase the output precision. + +WH: This gets tricky really fast. In the US it's common to have to measure fractional inches using binary fractions like 1/8 inch or 1 1/16 inch, which is four decimal digits: 1.0625 inches. And if you convert it into metric, the micrometer-level precision implied by the one-sixteenth is inappropriate. + +DRR: I definitely agree that I find it surprising to say, "Oh, well, if you construct this thing, it won't retain the three digits," right? That seems surprising. And throwing on minimums would seem surprising to me, just because a lot of the time you don't know what your input is. In order to know whether or not this thing would throw upon construction, you would have to diagnose or check the value beforehand. So I think being more lenient makes sense to me in that respect. But I may be misunderstanding something there. + +CDA: Just noting we have 15 minutes left in the time box. + +SFC: Yeah. Just noting that the significant-figures approach to doing the unit conversion works in a lot of cases.
But it's also not the only way to do precision. And if you look at the repository, we have some examples of directions that we could go in order to support—wait. My topic disappeared. I don't know what happened to it. I'll continue. In order to support use cases like the one we have here, where zero degrees Celsius is not really a three-significant-digits value, but rather has a certain error bar that is linear and not multiplicative, right? It's always accurate within 0.1 degrees Celsius, or maybe within a whole degree Celsius. And I had another topic, but it disappeared. So I'll just let OFR go first and put it back on. + +CDA: Yeah. Sorry. The queue got bugged. So if you were on the queue, please re-add yourself. + +OFR: Yeah. Just looking not in detail, but I think the new value representation also makes things simpler. And it avoids many of the complications that we talked about in the past. So I just wanted to mention that I think this is better than the previous iteration. It makes it composable, in a sense, in that we can talk about the value as a normal JavaScript object. And it also avoids some of the problematic use cases where suddenly, for example, using the value as a numerical value and doing arithmetic on it would be super slow. So I think this is a better iteration. Yeah. + +SFC: All right. I'm next on the queue. So first, thanks for the comments, OFR. I wanted to talk a little bit about what I see as a stage one concern versus a stage two concern. A lot of the discussions we've had so far, in previous plenaries—for example, is this backed by a string, a decimal, or a number; what exactly is the representation—I think that's a great question, and one we should be answering during stage one. I think what we've arrived at here is the shape of Amount: a type with two fields, where the value field is one of the existing JavaScript types.
The unit field is a unit defined by CLDR. It has a convertTo function, which does conversions, and it supports formatting. It has a toString function. That's the shape of Amount. I feel—in my opinion, and others are welcome to disagree with this opinion—that these questions about how we track significant versus fraction digits across the convertTo function, how we resolve conflicts between them, how we resolve single versus double rounding: those are great questions, and I feel like those are questions that we could answer during stage two. I'd like to point to Temporal and how it went from stage one to stage two: it was the general shape of the proposal, in terms of having objects for everything. And then there were still a lot of questions about how duration arithmetic works, right? And there were some detailed questions that ended up being big questions, where you spend a lot of time researching them. We spent several years at stage two for that proposal. But in my opinion, these types of questions we're discussing—how do we resolve the significant digits during conversion—are a good example of a discussion we can have during stage two. Others are welcome to disagree with me on that, but I just wanted to say that. + +EAO: This is mostly a question for implementers. As presented here, unit conversion requires us to do multiplication, division, addition, subtraction. We are now using numbers for those operations. Is there any concern for us to introduce operations here that would require doing—at least at the spec level—multiplication and division on mathematical values, effectively infinite-precision operations? Possibly with a final rounding to number. But still, is this a concern? Or would this be fine to include in the specification and require implementations to do? + +OFR: So yeah.
I think I much prefer basically using an existing JavaScript type for doing conversions and all the numerical things. So I think it's a better design to do it in number space and not to do it in some non-262 space. + +SFC: There are no other topics. I assume BAN was about to ask this question anyway, but I just wanted to ask whether the committee thinks this proposal is ready for stage two. I know that there's the issue that the spec text was not being rendered correctly on GitHub through pull requests, because GitHub Pages was buggy and such, right? So procedurally, process-wise, maybe we should come back again in May once WH—and not just WH, but others too—has had time to review the spec in more detail. But is there anything else, or does this committee feel that, process questions aside, this proposal is ready in its current shape for stage two? + +SHS: Are you asking for a pulse check, SFC? I mean, would that be useful here? + +SFC: We've asked for temperature checks in previous meetings. I mean, we could do a temperature check if we wanted to. But I was asking: if we come back in May, having worked with WH on getting spec issues resolved, will we be ready for basically stamping this as ready for stage two? Is there anything else between where we are now and where we want to be, other than making sure that WH and everyone else has time to review the spec in more detail? + +WH: Well, I mentioned some of the other issues. The combination of rounding and tracking rounding modes is one sticking point. The other sticking point is feet-and-inches or meters-and-centimeters for international use cases. + +SFC: Okay. Great. So those two things you just mentioned—I think I addressed those a little bit earlier—but are those things that you feel must be resolved before we consider this a stage two proposal? Are those fundamental to the shape of the proposal?
Or are those examples of items that can continue to be discussed while the proposal is at stage two, before it goes to 2.7? + +WH: I'd like to see the shape of the proposal. As I said earlier, my main concern is I don't want something to get into the language which produces answers that we'll have to fix later via some flags. So I'd like to get this right the first time. + +EAO: So just checking that I understand correctly: WH, you want to ensure that the value that you get after conversion is always correct. But we have the problem that we need to do the conversions in some space, and we've just heard some implementer feedback that it would be best if those operations were defined on numbers. Do you understand that we have a bit of a "do this thing or do that thing, but we can't necessarily do both" sort of situation here? + +WH: I don't believe that's the case. + +EAO: Would you be willing to actively participate in our sketching of this proposal for its presentation in May, so that we can be sure that we can get this figured out in a way where we can do the operations in number space for conversion while still fulfilling your criteria? + +WH: I could give you feedback on the proposals. The main thing to do is avoid double rounding. There are numeric techniques to do that. + +EAO: Okay. Let's continue this on GitHub issues. And we do have a recurring numerics call on which it would be great to have you on board, so that we can iterate on this—so that in May, when we present this with these very small changes, we can get this to go forward. + +WH: Okay. Do the changes include support for feet and inches for person height? Or what's your feeling on that? + +EAO: My sense is that we want to work out exactly what we're doing here, and what we're doing in 402 versus 262, during stage two to some extent.
Right now, for instance, regarding the concern that you raised about person height: given the specifications in the units XML, it does currently give you person height in foot-and-inch, which is a unit that Intl.NumberFormat currently does not support. Whether to add support for foot-and-inch into Intl.NumberFormat is something we can consider as a separate part of this, for instance. + +WH: It's kind of a large change to do after reaching stage two. So I'd like to see what the direction is when going for stage two. + +SFC: I just wanted to ask if anyone else had opinions on mathematical-value versus number-space arithmetic. I know that OFR has voiced support for numbers. I see Dan on the queue. + +DLM: Yeah. So I think we'd prefer number as well. I mean, I could double-check the current implementation, but I suspect that, yes, we would prefer number rather than having to implement mathematical operations on mathematical values if those don't already exist. + +WH: I think it's a false dichotomy. I'm not sure if you're assuming that I'm trying to push infinite-precision or unlimited-precision arithmetic here. That's not the case. + +CDA: All right. Nothing else on the queue. We are almost at time. + +BAN: All right. Well, since I've got one minute: to make sure I understand correctly, WH, some but not all of this can be solved by restricting the rounding modes, correct? + +WH: Yeah. If you get rid of the truncate and expand rounding modes, then you at least solve part of the 7-feet-equals-83-inches problem. + +BAN: But that is not by itself a way to resolve all of the problems. + +WH: That's for the conversions—we're kind of digressing here. But there are mathematical techniques you can use, while still doing IEEE double arithmetic, that give you exact results rounded to the nearest IEEE doubles. + +BAN: I just want to say this was a really useful session.
I'm glad that we went with the hour-long time box on this one. + +CDA: All right. Well, that brings us to our break. We will see everyone back here at 15:15. + +### Speaker's Summary of Key Points + +* Unit conversion functionality re-inserted into proposal. CLDR data used +* Amounts now store Numbers, BigInts, or numerical strings, depending on input. Numerical strings used iff precision options are set + +### Conclusion + +* Continue to work with WH on rounding issues +* Return in May, seek stage 2 at that meeting + +## Intl: Keep Trailing Zeros for Stage 3 + +Presenter: Eemeli Aro (EAO) + +* [proposal](https://github.com/tc39/proposal-intl-keep-trailing-zeros) +* [slides](https://docs.google.com/presentation/d/13zyaDAuriW2A4pwzklf0EZhOSUUmWqZxyIjzCcjWkXU/edit?usp=sharing) + +EAO: So hi. I'm EAO, presenting on Keep Trailing Zeros, which is a bit of a prequel to what we were just discussing in part in Amount. To start, a refresher on what has been presented earlier. The whole idea here is that the `Intl.NumberFormat` format and `Intl.PluralRules` select endpoints currently already support a decimal string representation of numeric values. And specifically, these support, at least in theory, trailing zeros. But in the current behavior, we are ignoring these trailing zeros, and this proposal is about fixing that. So it's about changing the behavior of `Intl.NumberFormat` and PluralRules when they are asked to format a digit string representation of a number. + +EAO: So we're talking about a change to the internals of `Intl.NumberFormat` and `Intl.PluralRules`, and not any really external APIs. There's a small option value, that I'll detail at least very shortly later, that does become a part of this change. And it's important to note that the treatment of formatting Number or BigInt values would not change, and the existing formatting options like maximum fraction digits and significant digits would still work exactly like before.
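The information loss EAO describes can be seen with plain JavaScript. The `Intl.NumberFormat` call shows the current (pre-proposal) behavior as I understand it, so treat this as an illustration of the motivation rather than the proposal itself:

```javascript
// A digit string can carry trailing zeros that signal precision;
// coercing it to a Number discards that information.
const input = "1.20";
console.log(String(Number(input))); // "1.2" — the trailing zero is gone

// Intl.NumberFormat today treats a string input like the plain number,
// dropping the trailing zero (the proposal would keep it, giving "1.20"):
const nf = new Intl.NumberFormat("en", { maximumFractionDigits: 3 });
console.log(nf.format("1.20")); // currently "1.2"
```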
+ +EAO: So the behavior we're looking to affect here is as presented, in particular in the second and third lines of the example. If we're formatting with a minimum fraction digits of one and the default maximum of three, we would end up with at least one trailing zero; this we already get. The change is that trailing zeros falling within the range between the minimum and the maximum fraction/significant digits would be included in the output. And they would be, of course, capped at the maximum limits as they are currently. And the small externally visible API change that this proposal carries with it is that the existing option `trailingZeroDisplay` would get a new option value `"stripToMinimum"`, which would make `Intl.NumberFormat` behave the same way it currently does with string digit input values. + +EAO: This has all been presented a bunch of times before. And this is a proposal that's currently at stage 2.7. There are two issues here that I'd like to raise and for us to discuss; depending on how we resolve these, it might be applicable for this to advance to stage three. The first of these issues is the behavior when we are rendering very, very small values that become zero. So for example, here we are formatting the number 1.2e-10, which with the given number format options here, of maximum fraction digits five, currently becomes formatted as `"0.0"`. And this 0.0—two digits of precision—is coming from the fact that the input value 1.2 has two digits of precision. And the question that needs to get resolved is: for something like this, where the input is given as a very small number, should we instead end up matching the maximum-fraction-digits precision rather than the current behavior? + +EAO: And the second question is related to what happens with the treatment of trailing integer zeros.
How do we understand them when they are passed in the input, effectively? This is only an issue when we're using exponential notation—so scientific or engineering—in the output. Right now, when we get an input like 700 as a string, we understand this to have three digits of precision, so we end up with an output that includes 7.00E2. The question here is: should we discard the integer-only trailing zeros and format 700-as-a-string as 7E2? Note that there is no proposal to change anything about the formatting of 700.0 as still containing four digits of precision, or of 701 as containing three digits of precision. + +EAO: And effectively, discussing these two issues is the only blocker that currently exists before asking for stage three for this, given that the tests are pretty well covered—there's a PR for the test262 tests—and there is a polyfill I've written that implements the logic as presented here on top of FormatJS. + +EAO: And in particular it's this first issue: if we want to change the current behavior when we're formatting very small values as zero, and we do need to go to the maximum-fraction-digits representation in this particular case, sufficient spec work might be required that it might be best to ask for stage three only at the next meeting. So if any of you have questions or topics on this, I would be very happy to go to the queue. Before that, thank you very much in particular, RGN, for a lot of the work in making sure that we have the spec at this level; he's the one that identified, for example, this issue as one that we do need to consider. I think, WH, you're first up. + +WH: First, a question. What happens if instead of 1.2E-10, you have zero in there? What do you get? + +EAO: You mean 1.0? + +WH: No; just have zero inside the single quotes here? + +EAO: A zero that has one digit of precision. So it would format just as zero.
+ +WH: OK, it would format as just 0. Would it change to 0.00000 if maximum fraction digits of 5 were given? + +EAO: No, the behavior of formatting Numbers or BigInts would not change at all in this proposal. + +WH: So I don't understand this then. You're proposing to change this to 0.00000. But in what situations? + +EAO: Only when we are formatting a decimal string representation of a number that is sufficiently small that, with the current number format digit options, it rounds to being a representation of zero. + +WH: Okay. + +EAO: I mean, to be honest, my personal preference would be for us not to do this, because it would be so much simpler. But if this is a behavior that somebody does want—and RGN, I think, does—then I would be happy to make the change. But I would be also quite happy to keep the current behavior. + +WH: Okay. I still don't understand when this change would apply. You're saying it's when a value is sufficiently small that it would display as zero. But zero is sufficiently small that it would display as zero, which you said wouldn't apply. + +EAO: There is no change of exponent for zero that would be applying. It's another question of really how do you—maybe it's best if I let RGN reply here. + +RGN: Yeah, thanks. So this has to do with string inputs. And you're looking here at the string `"1.2e-10"`, so 1.2 times 10 to the negative 10. And if instead that string was simply zero, no decimal point, then there would be no fraction digits to truncate. But if that string were 0.00000000012, that's mathematically equivalent to what you have here. And the bottom behavior would preserve that equivalence. The top behavior does not. + +WH: Okay. So this is just truncating the fraction digits. So if a string were 0.0 with 17 zeros, then the result would be what? + +RGN: As it is on the bottom. + +WH: As it is on the bottom. Okay. In that case, I would be in favor of the bottom result.
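RGN's equivalence argument can be illustrated with `Number.prototype.toFixed`, which rounds to a fixed number of fraction digits regardless of how the value was spelled. This is only an analogy; `Intl.NumberFormat` resolves its digit options differently:

```javascript
// Both spellings denote the same mathematical value...
const exponential = Number("1.2e-10");
const positional = Number("0.00000000012");
console.log(exponential === positional); // true

// ...so truncating to five fraction digits treats them alike.
// This matches the "bottom" behavior from the discussion:
console.log(exponential.toFixed(5)); // "0.00000"
console.log(positional.toFixed(5)); // "0.00000"

// A plain "0" input has no fraction digits to truncate:
console.log(Number("0").toFixed(0)); // "0"
```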
+ +SFC: I'm a little confused, because if you did spell out the number all the way, with 0.00000000012, that has two significant digits. And that's basically what we're tracking now. The spec, as I read it, drops all leading zeros, including those after the decimal point, up until you reach the first non-zero digit. It only retains them if the number is zero. Unless my reading of the spec was not correct. + +EAO: Yeah, that's right. + +SFC: So in that case, the first example here is consistent. + +RGN: What we're looking at here isn't limiting significant digits. It's limiting fraction digits. + +SFC: Yeah, but I mean, if you had maximum fraction digits five and you were using this as, for example, a number input, then I think you would just get zero. And now we return 0.0, which is two significant digits, because that's what we had in the input string. But I don't see any world in which the bottom output is ever expected. + +RGN: Would that apply equally if the spelling, instead of including an exponent, did not include an exponent? Because they have the same count of significant digits and also the same mathematical value. + +SFC: If you spelled it out as 0.00000000012, I would expect the output to be either the top or just the number zero. I would not expect it to be the bottom. And I don't think it is in this current spec. + +EAO: The top behavior is what we currently have. + +RGN: For clarification, when you say current spec, you're referring to this proposal or `Intl.NumberFormat` as it exists? + +SFC: I'm referring to this proposal. + +EAO: Same. + +RGN: Right. The top value there accurately describes the current behavior of this proposal, which is called out by issue number 15. + +DLM: NRO is on the queue. + +NRO: Yeah. I think the current proposal behavior, so 0.0, is probably the worst of the possible alternatives. Because I look at that and it is two significant digits. And the explanation is that its two significant digits come from the input.
But they're two different significant digits than the input's. There is nothing about the input that tells us that the rightmost significant digit is the first one after the decimal point, which is what we're getting from this output. So I think that logic is just wrong. I can understand the logic behind the third result, where it's not actually telling us we have five significant digits; it's just getting that five from the maximum fraction digits, so it's just cropping at that five, without trying to say how many significant digits there are in that result or which one is the rightmost significant digit. So I prefer the bottom output to the top one. I would also probably find just zero acceptable if others prefer it. + +SFC: Yeah. I'm next on the queue. I think that if the rounding settings given to `Intl.NumberFormat` cause all significant digits to be dropped from the input string, then maybe we just treat it the same way as we would treat a number that got all its digits dropped: it is zero and you just format it as if it's zero, and we don't try to do anything else smart there. If we were still retaining any of the digits, then of course I would expect the bottom one to be the case. But if everything just goes to zero, it doesn't make sense to me that the bottom would be the case. + +EAO: WH, RGN. Very quickly, would that actually work for you two as a solution? + +WH: No. + +RGN: No. + +EAO: Okay. Thank you for the input on this one. I'm running out of time box. I would like to get any input, if anyone has it, on the second question as well. My preference here would be to not change the current behavior, and to retain an understanding of the integer trailing zeros when they are included, and not to lose it. I kind of want to get an understanding if anyone has thoughts on this before we run out of time. + +WH: Are we going to the queue? + +RPR: I think we're going to NRO first. + +NRO: Yeah. I prefer the current behavior here.
Intuitively, if I see 700, I think of it as being precise—700 meters is exactly 700 meters. So I would prefer to keep the first behavior. And if somebody really wants to say only the seven is significant, they can probably use 7e2 as the input there. + +RPR: WH? + +WH: And I'm from the opposite camp, where if I spell out a billion as one with nine zeros after it, I would prefer to get 1e9 instead of 1.000000000e9. + +EAO: But would you have that as a string input that you would expect to use like this? + +WH: Sure. + +EAO: Would you be opposed to the current 7.00E2 behavior? + +WH: I think it's suboptimal. I would not object to the proposal if you did that. But I think it's worse than 7e2. + +EAO: Okay. SFC? + +SFC: I just wanted to voice support for 7.00E2, because I don't think it makes a lot of sense if you have 699, and then you have 700, and then you have 701. When you have numbers in the hundreds, I definitely see them as being integers, and I think that that's the more intuitive result. If we don't know which is the more intuitive result, we have groups like TG5 to help us tell. But at least in my opinion, I tend to see 7e2 as being the more niche case, and 700 meaning three significant digits as the more common case. And I think we should probably lean toward what is the more common case here. And if you really want 7e2, then you can write 7e2. When we get up to a million, then maybe I can sort of see WH's perspective a little bit more. But at least for values in the hundreds, it doesn't make sense to me at all. + +EAO: Cool. What I am getting out of the replies here, given that I'm pretty much over my time box, if I understand right, is that issue 15 should probably be resolved in some manner, changing the current behavior towards the proposed one, with details to be determined.
And the overall sentiment that I'm getting for 17 would be that we should probably continue to treat trailing integer zeros as significant. Given that 15 is going to require some spec text change—we're going to need to change some of the internals of what information we are storing throughout the process—and given that this proposal is specifically about spec text internals, I'm not going to be asking for stage three here. But we'll come back, hopefully at the next meeting, with a proposal to solve 15, and then ask for stage 3 at that point. Thank you. + +RPR: Thank you, EAO. That was a good summary. + +### Speaker's Summary of Key Points + +The proposal's current state was presented, including two open issues (#15 and #17). Stage advancement was not requested. + +### Conclusion + +* Issue #15 should be resolved with a change as requested. +* Issue #17 should be resolved by retaining the current behaviour. +* The proposal will be brought up for advancement at a future meeting. + +## `JSON.parseImmutable` update + +Presenter: Peter Klecha (PKA) + +* [proposal](https://github.com/tc39/proposal-json-parseimmutable) +* [slides](https://docs.google.com/presentation/d/1NXb3HWjt0BiVJl2SZlAVeSvZIBlzO1z9X28hJL28w7s) + +PKA: Hello. I'm PKA. I'm from Bloomberg. And I'm here to present about `JSON.parseImmutable`. This came up at the last meeting, during the proposal review. + +PKA: Some background, in case you somehow forgot all about `JSON.parseImmutable`. This was a proposed addition to the standard library which takes a standard JSON string, parses it, and returns an immutable version of what `JSON.parse` would return. This proposal was previously tied to Records and Tuples; the idea was that it would return a record or a tuple as appropriate. However, Records and Tuples has been withdrawn. So the question here is how to move forward, or whether to move forward with this proposal. + +PKA: There are some options. One is that we could withdraw this proposal.
That would be perfectly fine. But it seems like parsing directly to an immutable object is still useful, and at Bloomberg we have some uses for this. So some other options: we could tie it to the composites proposal and return a composite. Composites are intended to replace certain aspects of Records and Tuples, especially the use of immutable objects as by-value Map/Set keys. However, composites do not include a tuple counterpart, so it's not clear how we would handle JSON arrays in that case. And composites are also only shallowly immutable, so it's not obvious that there's a lot of value here compared to just doing it yourself. A third option, and this is our preference, is to return a deeply `Object.freeze`d plain object or array. + +PKA: So yeah, I guess I'd like to just pause here and ask whether the committee has any feedback on that. Does anyone else feel that there are use cases for this or not? Does anyone feel strongly about returning a vanilla object that has been deeply `Object.freeze`d versus a composite or something else? + +MM: So first, just a clarifying question. When you say composites are only shallow—composites are, but when you suggested that parseImmutable could return a composite, I assumed you meant that it could return a deep composite. + +PKA: Yeah. I guess that's also another option. That's not what I had considered. But sure. Yeah. That's a possibility. + +MM: Okay. That would obviously require addressing the tuple question. But for JSON, even at the top level, we still require addressing tuples. + +PKA: Yes. Yeah. That didn't occur to me immediately. Again, given the use case of composites as keys—I don't know, actually, if ACE has a similar thought here—the idea of having a deep composite wasn't what I first thought of. But sure. Yeah. That's totally an option. + +MM: Okay. In any case, whether it's a deep composite or just a deep freeze, I am in favor of this.
And in particular, deeply frozen or deeply composite plain data—the thing you get from `JSON.parse` without any procedural intervention—is something that can be safely shared as the completion value of an import between multiple importers. So that's why I think there's still value here. And if you do this, I would also like to revisit import type: "json" and see if we can directly import an immutable JSON. + +PKA: Yeah. It's interesting you bring that up. I mean, that's related to our use case for this. So yeah, I'll just say we're interested in those ideas too. + +MM: Great. Thanks. + +RPR: NRO? + +NRO: I just want to mention that ideally, this would have an `{immutable: true}` or `{frozen: true}` or something like that as an import attribute. Right now, import attributes only support string values and nothing else. But the reason is just that we didn't have a use case for a boolean. If there is a use case for a boolean, it would probably be entirely appropriate to have a normative PR adding support for booleans there. + +MM: I'm sorry. Support for booleans where? I didn't understand that. + +NRO: In import attributes. So you can say `{type: "json", frozen: true}` or something like that, in the syntactic space for type-JSON imports. + +PKA: Oh, wait. Yeah. Import—do a JSON import that's deeply frozen. + +MM: Okay. Okay. Thanks. + +MF: I'm not seeing why composites are at all appropriate here, other than that this was previously tied to records and tuples and the two transitioned away at around the same time. That's just not what composites are for. And I'm pretty opposed to using composites just because this is a handy immutable thing that we have lying around. They're meant for a very specific purpose, and that is not this. This is a very generic thing. So we shouldn't do composites. Is anybody in favor of that, and can they explain why? Otherwise, I think we should just rule that out right away.
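PKA's preferred option (a deeply `Object.freeze`d plain object or array) can be approximated in userland today. A hedged sketch: `parseFrozen` and `deepFreeze` are hypothetical names echoing the naming discussion later in this session, not the proposal's API.

```javascript
// Userland approximation of option 3: parse, then recursively freeze.
function parseFrozen(text) {
  return deepFreeze(JSON.parse(text));
}

function deepFreeze(value) {
  if (value !== null && typeof value === "object") {
    // JSON.parse only yields plain objects and arrays, so walking
    // own enumerable properties covers everything it can produce.
    for (const key of Object.keys(value)) {
      deepFreeze(value[key]);
    }
    Object.freeze(value);
  }
  return value;
}

const data = parseFrozen('{"a": [1, 2], "b": {"c": 3}}');
console.log(Object.isFrozen(data));   // true
console.log(Object.isFrozen(data.a)); // true — frozen at every depth
```

Note that MM's caveat below still applies to this result: it inherits from the current realm's `Object.prototype` and `Array.prototype`, unlike a composite.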
+ +MM: I can explain a reason for it. Whether I'm in favor of it or not, I haven't decided yet. But a reason for it is that composites are not tied to their realm of origin. So they can be passed between realms without any further danger than passing primitive data. They're like records and tuples in that way, in that you don't have to do a deep copy in order to safely pass them between realms. + +KG: That's also true of frozen objects, right? + +MM: No. Frozen objects inherit from `Object.prototype` and `Array.prototype`—the `Object.prototype` and `Array.prototype` of the realm of origin. Whether you're in a position where those are safe to leak or not is certainly a case-by-case question. But largely, it's going to be accident-prone for people who don't think about that deeply. + +KG: Composites do inherit from `Object.prototype` in the current proposal. + +MM: I don't believe so. + +ACE: I can't remember off the top of my head. But either way, composites are only stage 1, so if they currently don't inherit from `Object.prototype`, there's no guarantee that would stay the same in the future. I think there have been voices for either way. The two bigger things here, to me, are, as PKA mentioned, the lack of tuples—saying it's JSON parse but no arrays feels wrong. + +ACE: Yeah. And also MF's point that the use case for composites is primarily around keys, and `JSON.parse` is more about data. So really, you might see the two near each other, like a map where the keys are composites and the values in the map are frozen JSON. So they might be near each other, but I think the use case is slightly different. + +JHD: Yeah. I agree with what MF's saying. I just wanted to add that there was a time when records and tuples, and possibly even composites, were very generic and very broadly usable across many use cases, far beyond the motivations for the proposal.
But many, many constraints that have arrived over the course of that history have changed that. And I don't think they're currently broadly applicable enough to make sense here at all.

JSL (on TCQ): A +1 to either option 2 or 3. But prefer deeply frozen.

DLM: And then next up, we have NRO.

NRO: Yeah. So if we end up returning frozen objects, I would prefer to rename this thing to `parseFrozen` or something like that. There are multiple concepts in JavaScript for what immutable means. And if we're just relying on the "this thing is frozen" definition of immutable, we might as well put it in the name so that it's clear to developers what we actually mean.

PKA: Yep. I agree with that. And the next slide is going to be some further issues, assuming we go with two or three.

DRR: Yeah. I think I agree that composites probably don't make sense as much. I would prefer objects. Given that this proposal is moving towards just using frozen objects, I wonder if you have examples, or I would encourage looking for examples, of internal code where `Object.freeze` is used as the reviver, to see how prevalent a pattern this is. I think this is maybe a nicety, plus possibly more efficient for engines to implement than just using standard `JSON.parse`, so that may be one of the motivations here. So I'm pro, but I would encourage looking into that as further motivation for this proposal.

PKA: Yeah. So I agree: if we can find that kind of evidence, that would be great. Although I think it's also worth thinking about the extent to which this is a thing people don't do because it's a pain, but might do if there's an option for it, and there are potentially a lot of benefits to doing it.

JSC: Oh, yeah. I missed that, NRO; I had added the same thing. I also would suggest renaming it, but it sounds like you have a question about that. That's all.

SFC: Yeah.
It seems to me like what you're proposing here is that you care very much about deep immutability, and that's what `Object.freeze` is for. Composites are all about shallow immutability. Now, you could build up nested composites or something like this. I feel like you might want to have, in the composite proposal, a "composite build deep" or something, which takes a non-composite object and makes it into a composite object. And that seems in scope for the composite proposal. But it seems like regardless of whether or not we have that, being able to parse to a non-composite object that's frozen seems like a useful feature to have in the language.

PKA: Okay. Great. Thanks for that feedback. It seems like there's broad—I don't want to say consensus, maybe—support for proceeding with the proposal and going with `Object.freeze`. I am intrigued by MM's thought about the realm thing. But I'm guessing we will bring this back in the guise of option number three.

PKA: Some other remaining issues that I'm not going to resolve here today, but just to put them in people's minds. And if anybody does happen to have feedback about them now, that's great. Is `parseImmutable` still the best name? We've just heard feedback that no, it isn't, and I think I agree with that: "immutable" is too generic for this case. If we go with, for example, the `Object.freeze` option, it should probably be `parseToFrozen` or `parseFrozen` or something like that. And how should custom revivers be handled? Should it be that you can't give back an object; or you can, but it must be deeply frozen itself; or you can, and they get frozen; or a bunch of other options. So just something to think about. If anybody has ideas about that or thoughts to share, throw yourself on the queue. Oh, I do see that we do have MM. Go ahead.

MM: Yeah. So I just want to add to the options on custom revivers.
My favorite option is that `parseImmutable`, unlike `parse`, just does not accept a custom reviver at all. Custom revivers have been a pain for engines anyway, because otherwise `JSON.parse` can be quite fast, and anything that causes it to fall out to user code kind of destroys that fast path, from what I understand. Although I am not an implementer.

PKA: Yep. That's a worthy option as well.

KG: Just wanted to point out that we just added the parse-with-source-text thing, and it only works with revivers. It was presumably motivated, and if it was motivated for `JSON.parse` then it's motivated for `JSON.parseImmutable`.

MM: Oh. Oh, that's a very good point. Okay. I withdraw my suggestion.

DLM: Okay. KG, you're up again.

KG: I wanted to suggest just having an options bag option instead of a separate name. When we were going to return a Record, it made sense to have a different name, because we try to avoid the return type depending on arguments. But if it's just frozen, that's arguably the same type, so an option might be cleanest.

JSC: Just wanted to point out that `JSON.parse` appears in hot paths a lot: every time an event-handling loop receives a message in JSON, etc. I'm fine with an options bag as long as it wouldn't have a performance cost. But I just want to point out that this is something that shows up in hot paths a lot. So if that impacts the choice of an options bag versus a separate function, there it is.

PKA: Yeah. Thanks for that feedback. And that's an interesting idea, KG, about the options bag; a totally worthwhile option to look into.

RPR: Okay. Anyone else up? I would just say we have about two minutes left.

SFC: Yeah. Sure. Just thinking a little bit about the accessibility of the programming language we use. The word "freeze" already has these semantics. I could have checked before I posted this whether we used the word "frozen" already in the spec. If we do, then maybe that's okay.
But most of the time when we have two forms of a word, they're the same root word: you just add an "-s" to the end or an "-ed" to the end. But if you're completely changing the word, it's not necessarily clear that these are the same concept, that "freeze" and "frozen" are actually the same thing, just inflected differently.

JHD: `Object.isFrozen`. `Object.freeze`.

SFC: So if you want to follow that precedent, it seems okay.

PKA: It is an interesting point.

JHD: Also `Object.preventExtensions` and `Object.isExtensible`, which would be even weirder.

PKA: So one final question I want to bring before the committee is: should this proposal stay at stage two? Or is this enough of an evolutionary change in the proposal that we should return to stage one? I think the default would be to stay; inertia will keep us at stage two unless someone feels that it should go back to stage one, and we're totally comfortable with that. Sorry, what was that?

MM: I would support it staying at stage two.

DLM: Any objections to it staying at stage two?

PKA: Okay. That concludes the discussion. We'll bring it back at some point to—Oh, no. Sorry. Somebody's on the queue.

EAO: I think, given the scope of the changes being applied here, moving back to stage one would be appropriate. But I don't feel like that is a requirement. Let's put it this way: if someone were to propose moving this back to stage one, I would support that.

DLM: Okay. Well, unfortunately, we're at time. So I don't know how to resolve this particular question. I mean, I guess we'll have to resolve it offline.

PKA: We'll talk it over offline. Thanks, everyone.
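For reference, the `Object.freeze` direction (option 3) can be approximated in userland today with a reviver. This is only an illustrative sketch: the name `parseFrozen` is hypothetical, echoing the renaming suggestions in the discussion, and the real proposal would specify its own semantics.

```javascript
// Userland sketch of a deeply-frozen JSON.parse. The name `parseFrozen` is
// hypothetical. JSON.parse calls the reviver bottom-up, so by the time a
// parent object is revived, its children have already been frozen; freezing
// each object/array as it is revived yields a deeply frozen result.
function parseFrozen(text) {
  return JSON.parse(text, (key, value) => {
    if (value !== null && typeof value === "object") {
      Object.freeze(value);
    }
    return value;
  });
}

const data = parseFrozen('{"a": [1, 2], "b": {"c": 3}}');
Object.isFrozen(data);   // true
Object.isFrozen(data.a); // true
Object.isFrozen(data.b); // true
```

Note that, as raised in the queue, routing every value through a reviver defeats the `JSON.parse` fast path, which is part of why a built-in could be more efficient.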
### Speaker's Summary of Key Points

* `JSON.parseImmutable` might still be useful even though its original companion proposal, Records & Tuples, has been withdrawn
* Questions about the name and reviver behavior are still open

### Conclusion

* There was consensus in the committee for keeping this proposal going forward with returning a vanilla deeply frozen object, although concerns about sharability across realms were raised
* There was some agreement that the function should either be renamed, or collapsed into an overload of `JSON.parse`
* `JSON.parseImmutable` will return

## TypedArray find within for Stage 2

Presenter: James M Snell (JSL)

* [proposal](https://github.com/tc39/proposal-typedarray-findwithin)
* [slides](https://github.com/tc39/proposal-typedarray-findwithin/blob/tc39-plenary-march-2026/slides-export.pdf)

JSL: TypedArray find within. All right. Basically, asking for stage two advancement. Just to recap where we started: what this is, is finding a subsequence of elements within a TypedArray rather than just an individual element. It is a common use case in the ecosystem. We got this to stage one in Tokyo. There's spec language there. I did hear from the editors a bit; there are definitely some changes needed before 2.7, and I have an open PR to flesh some of that language out now. But I don't think that PR is necessary to go to stage two; I definitely think it blocks 2.7.

JSL: Solution right now: we have three methods, `search`, `searchLast`, and `contains`. I'm not happy with the naming here, so if we can bikeshed the naming, fantastic. `search` is forward: you're finding the first instance of this thing. `searchLast`: you're finding the last one. `contains` is basically just a predicate returning true/false. Needle types: right now, it is TypedArrays and iterables, other than string; a string needle would just be an error right off the bat.
The idea here is that the comparison uses—I can't remember the name of it. But what is it? SameValueZero? I think it is.

JSL: Okay. Spec text status. Like I said, there is draft spec text in the repo. It introduces a couple of new abstract operations. Basically, if you look at the new PR, there is a basic search algorithm there.

JSL: Design decisions since talking about it originally: `searchLast` was added; a position parameter on all three methods; the needle accepts any iterable, not just the same type; string needles throw; the needle is snapshotted, specifically to deal with SharedArrayBuffer, which we can talk about more in just a minute. And wrong-type needle elements return -1 rather than throwing. By wrong type there, I mean if you try to pass a float array to search a uint array or something like that, where the semantics don't match up.

JSL: All right. Some of the open issues. Is `contains` justified? This one came up: if we have `search` returning -1, do we actually need a true/false return? Definitely a legitimate question. I think for some ergonomic cases, having a `contains` that just returns true/false does make sense. But I think this is something we can debate.

JSL: Whether it should throw, or whether returning -1 for the wrong types is okay. I think this one is relatively uncontroversial, but if folks have opinions on this.

JSL: SharedArrayBuffer-backed needles. This actually causes a bit of a challenge. Right now, in the spec text, I have it where the haystack is never snapshotted, right? It's just live, whatever the thing is there. So if it's a SharedArrayBuffer, it can change out from underneath you while you're searching it. I do have it, though, where the needles are snapshotted, so at least one part doesn't change. But we need to discuss this and figure out if this is something we actually want.
In chatting about it a little earlier, one of the other options here is we just say SharedArrayBuffers aren't supported here at all, and we just don't accept them in this operation, because supporting them does add some complexity here.

JSL: I think there are some ergonomic questions to ask and answer about whether we need SharedArrayBuffer here. None of the use cases that I've encountered need SharedArrayBuffer in this at all. It's included here primarily for completeness, not because there is a strong motivating use case for SharedArrayBuffer here.

JSL: And then, as I said, the smaller ones we've already addressed in the repo. So the main ones are: is `contains` justified, throwing versus the -1 return, and SharedArrayBuffer. We can go to the queue if anyone has any. WH.

WH: The type of the needle introduces a bunch of footguns here. This wasn't an issue back when the needle had to have the same type as the haystack. But now that needles can be arbitrary iterables, that introduces a bunch of problems. Users may be surprised to not find things which they'd expect to be there. An example would be to search a 64-bit int array for a needle with some integers in it. Or if you search a Float32Array for, let's say, 1.1. This wasn't an issue back when the needles were required to be the same type as the haystack. I think a better interpretation of this is, if we want to allow needles which are just arbitrary iterables, then this should behave as though you had created a needle of the same type as the haystack using the iterable to initialize the needle and then did the search. This would fix both of those cases.

JSL: Yeah. Right now, the spec text—I'm trying to remember, it's been a few days. I'm pretty sure we do normalize it out. The check is, I think, SameValueZero. So we do adjust that. But to be honest, I'm not all that thrilled with supporting arbitrary iterables here.
This came up in the discussion of some of the feedback in the repo, and was added based on that feedback, because I couldn't come up with a strong enough argument not to. There is an ergonomics argument in favor of supporting iterables. Personally, none of the use cases that motivate me bringing this use iterables at all, and I'd be perfectly happy ripping those out.

WH: The issue is it introduces silent bugs unless you coerce the needle to a TypedArray of the same type. If you search for a needle containing those same integers in a haystack containing 64-bit integers, you'll silently never find them.

JSL: Yeah. Okay. We have one reply from KG, and then we have four items on the queue. KG?

KG: We could say if the iterable contains items of the wrong type, then we throw. That solves the problem of searching for a Number in a BigInt array at least. It doesn't solve the problem of not finding 1.1 in a Float32Array, but that's OK; it doesn't contain 1.1.

WH: Yeah. I'd prefer to just avoid these problems by logically coercing the needle to the haystack's type and then searching for that.

KG: Coercing means if you search for the number 500 in a Uint8Array, you might find it, and that's at least as weird in my opinion.

WH: That's what you get when you create a Uint8Array with the number 500, right?

KG: Yeah, but I didn't create a Uint8Array.

WH: So let's avoid having iterable needles at all. Just go back to searching only for ones of the same type.

KG: I think that searching for iterables is important. Allocating TypedArrays can be expensive. Having to allocate a two-byte Uint8Array just to search for a delimiter in some serialized message is potentially more expensive than just doing the search manually.

JSL: NRO?

NRO: One of the use cases you mentioned was when dealing with streams. I actually recently had a use case for this exact proposal, or I thought so, when I was trying to search for something in a file.
And I was getting a readable stream of Uint8Arrays out of it. There is a big footgun there, which is that when trying to search for something in a stream, you need to be careful about what happens when your needle is split across multiple chunks. This is still a good primitive, probably. But you need to do some chunk combination; or, if your search fails, you need to manually keep track of partial matches from the end of your Uint8Array. I wonder whether we might want to have an API that gives you these partial-match indexes from the end. Or whether you have experience with `Buffer` in Node—I don't know if Node streams use buffers—but whether that's an issue there.

JSL: I'm trying to think back over the issue tracker. I don't recall anyone specifically bringing it up. `Buffer` does have this kind of API, and has had it for a long time. It's never come up in any kind of issue tracker or bug or feature request that I've seen. Whether or not that actually means anyone hits the problem is obviously hard to say, I think. But it is a valid point.

NRO: Yeah. It's important to say it's a difficult problem to notice when you have it, because frequently you have large chunks and small needles, and so you don't really notice that, oh, I have the wrong implementation. So yeah, it's just something to be careful about.

JSL: Yeah. I mean, the problem is not unlike encoding issues with multi-byte encodings. So it's a similar type of problem. Okay. We have a reply from KM.

NRO: It's a reply to the previous topic?

KM: Yeah. This is actually a reply to the previous topic; I didn't get it in before we switched to the next one. But I guess I don't—your iterable has to allocate an object for each item in the thing that you want to iterate over. So if you want to search for a two-byte thing, you're probably actually allocating more by using an iterable, rather than actually allocating a new array.
Just simply because each of those objects is going to be bigger than—at least, I can speak for JavaScriptCore—the TypedArray object is the same size, I think, as a regular object. And so you're probably allocating more total bytes just iterating a two-element array in general.

KG: I thought JavaScriptCore put TypedArrays on the heap.

KM: The TypedArrays are in the heap, yeah. There's a two-byte backing store in the heap; well, it'd be 16 bytes, because heap allocations are rounded to 16 bytes. And then a 16-byte object (it might be a 32-byte object) pointing to it. But your iterator result object (the `{value, done}` object) will most of the time be two properties in an object, plus the object that references that backing store. And that's for one item in your list.

KG: I am assuming that engines do something smarter for arrays than using the actual iterator machinery.

KM: People try. I mean, we do that for the most part. But you still have to allocate the iterator object; the wrapping iterator driver will still be allocated. And so that is also probably about the same thing. So it's not clear.

KG: You don't have to allocate the iterator driver.

KM: You don't have to. But I mean, it's pretty painful not to do that, because then you have to have code that handles both that object existing and not existing. So I don't know if it matters much. I guess in the end, the iterable seems like a lot of premature optimization there.

KG: OK, with that feedback I'm OK dropping iterable arguments for now. We can always add them back later.

JSL: Okay. MF, you have two items on the agenda? Thank you.

MF: Yes. So naming is hard. And with this thing, I feel like without a name that strongly indicates that we're taking a subsequence, consumers are going to get confused about whether we're passing an element or a subsequence.
And I just want to see, even if we don't arrive at something that we can all be happy with, I just want to see that there's been work to try to find a name that has that implication. I know that right now it's `search`. That doesn't really indicate to me that it's a subsequence. And before, it was `findWithin`, and that strongly indicated to me that it was not a subsequence. But I don't know. All names are bad. I want to see that we've explored the space.

JSL: I went through maybe about a dozen possible names, and all of them sucked. So I'm open to.

MF: Okay. Yeah. Well, if you've done that work, it'd be nice to present that to the committee. So we know that the diligence has been-

JSL: `search` was the best one I could come up with.

MF: That's really unfortunate. Okay. Then moving on to my next topic.

MF: So it seems like we have not figured out whether `contains` is included in this proposal. That seems significant; that seems like something that we should do before stage two. That would double the size of the proposal from two to four methods. I don't particularly have an opinion on whether it should be included or not. I think in that issue, we discussed that it would be about how common we think the uses of these methods would be relative to each other, if we had them both. I just think it's something we should figure out before stage two as part of the process.

JSL: I actually agree. If we do advance to stage two and we decide that `contains` does not belong, then I would ask for conditional stage two, and I would just remove that from the spec. Yeah. So. RBN.

RBN: I just wanted to point out that when we look at things like indexing into an array that doesn't contain an element, or, in a TypedArray, finding the index of something that's not a number or isn't even coercible to a number, we generally don't throw for those and just return -1. So I think it's fine to return `-1` for things that aren't valid.
The only time I think we might want to throw is if there is any reason why you'd pass in a TypedArray and we think comparisons won't work because of alignment issues. But beyond that, I don't think there's really a reason that throwing makes sense, only because returning -1 stays consistent with how we do `indexOf` and other types of find operations when we're dealing with invalid types as the needle. I think the closest example of this would be something like String.

JSL: Yeah. Okay. OFR.

OFR: Okay. So this is still being debated on the chat as well. But I'm wondering if we support SharedArrayBuffers here. I'm actually not sure. But with the concat thing, I'm pretty sure. That if we support SharedArrayBuffers, then it will be hard to write a spec in a way that doesn't make the algorithm observable by probing them. And then that would basically limit us to the implementation that would be written in the spec, which could be kind of undesirable.

JSL: Yes. I think the observability of searching a SharedArrayBuffer, having it be the haystack, is less of a risk. But as I said, none of my use cases require it. I'd be very happy just to say `SharedArrayBuffer` is off the table for this, and neither the needle nor the haystack could be backed by a `SharedArrayBuffer`.

WH: Yes, it's potentially observable. I don't think it's a problem in practice. But I can design pathological cases where, for example, if you move a snapshot of a needle forward through the haystack, then depending on which direction the search algorithm goes through the haystack, it might miss it or it might never miss it. But these are pathological cases, and I don't expect those to arise in practice, so I'm not worried about it.

JSL: So the question there, then, with that: is `contains` justified is still an open discussion; I think we haven't settled on that. And I'm not hearing any objections to just returning the -1 as opposed to throwing.
Unless somebody believes strongly otherwise. And then whether we should support SharedArrayBuffers is still an open question. So KM.

KM: Yeah. I mean, there are already a bunch of existing TypedArray algorithms that work with SharedArrayBuffers. But they do weird things if you modify them from another thread while they're executing, and in theory the implementation would be observable. In practice, with TypedArrays there's not really any real win you can gain from trying to do something clever in a lot of them. So it's not super clear to me why it would be more of a problem here than it would be for those other algorithms.

JSL: It shouldn't be more of a problem. The question is more: should we let that problem continue, or should we try to make it better? And that's, I think, where it comes down to. I don't think anything here, this one or concat, makes the SharedArrayBuffer problem any worse.

KM: Sure. I agree with that statement. I don't have super strong opinions on it. I think it would be a bit weird to not be able to do it, because the footguns are kind of the same as the other ones, and we already have accepted those footguns and haven't heard horrible complaints about them. And most people using SharedArrayBuffer are probably aware of these kinds of footguns, hopefully.

JSL: Hopefully.

KM: That may not be true in practice. I mean, you never know.

JSL: RGN?

RGN: All right. Running down the list. I would prefer not to include `contains`, both because it can be built up from the other aspects, and because the name is misleading. My opinion would flip if we found a name that clarified that it was spanning across elements rather than being limited to single ones. On throwing versus returning -1 on wrong types: if we're not accepting iterables, then the types themselves are sufficiently locked down that I would like to throw.
For SharedArrayBuffers, I agree we've already accepted the footguns. Multi-threaded consumers already have to do synchronization on their own, so if they have such synchronization, then applying these operations to SharedArrayBuffers seems like it would be useful in those cases. And not mentioned here, but for the streaming use case, when the needle spans multiple chunks, I do think it's important to have an answer for that, even if the answer is "you must build your own solution in user space".

JSL: Okay. That's fair. Let me ask this: does anyone think that `contains` is absolutely necessary for this? Okay. Seeing that there are folks thinking that it's not needed, and nobody is standing up and arguing that it is, we can remove that and simplify the spec a little bit. So, fantastic.

JSL: The two remaining open questions, then, are accepting iterables and SharedArrayBuffer. On SharedArrayBuffer, it sounds like we're accepting the footguns. I think it'd be nicer to not have them, but I'm not hearing anybody saying that we absolutely shouldn't, so I think we move forward with having those in. I do think that there is the question there of: is it necessary then for us to snapshot a SharedArrayBuffer needle? We're already not snapshotting the haystack; from a performance perspective, that's not going to be feasible, because it's going to be too big. Needles are typically a lot smaller, so they're a lot easier to snapshot. But do we need to? KG. On the queue.

KG: I think we should not snapshot. If you're using a SAB you're opting in to doing your own synchronization.

JSL: Are there any arguments against that?

JHD: Yeah. I don't care about the `SharedArrayBuffer` case, right? It's fine. Do stupid things, win stupid prizes. But the snapshotting of the needle is about more than just `SharedArrayBuffers`, right?
It's also about the—or I guess, is the concern that there'd be no user code running that could possibly change it between the snapshot point and the end? If that's the case, there's no need to snapshot it.

JSL: Yeah. And it also goes back to the iterables question. So if the needle is an iterable, then user code can run while you're collecting it, in which case snapshotting is necessary. If we say iterables aren't there and it just has to be a TypedArray of the same type, there's no opportunity for user code to run and we're good.

JHD: I mean, even if it didn't have to be the same type, right? It's just okay. Then yeah, not snapshotting seems fine in that case.

JSL: We have SHS.

SHS: I guess I missed it; I'm not quite sure what you're talking about. So if you don't snapshot the needle and the needle changes, is it expected that implementations would use the new needle, as if it changed in the middle? Because I would imagine implementations would be building index tables or whatever off the needle to potentially make the search more efficient.

JSL: Yeah. The idea is that the search is not observable and won't be impacted by things like that. So I think that's one of the open questions we need to figure out.

KG: The algorithm would probably be specified as doing a read of the whole needle at every point in the haystack, but these are not synchronizing reads, so implementations would be allowed to return any answer consistent with the memory model.

DLM: I just wanted to jump in and say we only have two minutes left.

JSL: Okay. So based on this conversation, I think there are changes that need to happen, so stage two is probably not there. The question is, is there enough here, if I'm able to make these changes in the next couple of days, for conditional stage two? And the changes specifically would be: remove `contains`; keep the SharedArrayBuffers, but no snapshot.
And based on the conversation I'm hearing, remove the iterable needle and just keep it TypedArrays.

MF: I don't think it's quite straightforward enough to just do a conditional stage two at this point. I'd prefer you just—we have a couple of days. We just do an overflow slot at the end, if we have time, where, if it's then prepared for stage two, we can do an advancement.

JSL: Sounds good to me. SHS.

SHS: So did we resolve the question about, even if we get rid of general iterables, whether it would throw or return -1 when searching with a Uint8Array needle in an Int16Array haystack?

JSL: No. We have not resolved that. And if we remove iterables, what do we do with throw or -1? I guess that's still an open question.

DLM: Yeah. And we're at time. So we should save that for a potential overflow.

(see day 3 for continuation of this topic)

## Iterator Includes for Stage 1, 2, or 2.7

Presenter: Michael Ficarra (MF)

* [proposal](https://github.com/michaelficarra/proposal-iterator-includes)
* [slides](https://docs.google.com/presentation/d/1a-1RQayP-tcJd2VAhFGoiL_dRT8gD8_qrp__qSgC1A4)

MF: All right. Iterator includes. So this is a new proposal, going for many possible stage advancements. I feel that the motivation for this proposal is pretty straightforward: it is the same motivation that we had when we added `Array.prototype.includes`. You can already do this kind of thing with a `some` with a constant comparator, but `includes` more clearly expresses the author's intent and is a very common operation. There's also the added motivation that choosing a comparison operation is a surprisingly difficult thing to do in JavaScript, because we have four choices that are very similar, with very subtle differences that will matter sometimes but not other times. And you don't know when you really have to commit to thinking about that.
And when it does matter, the committee has chosen a default already in `Array.prototype.includes` which is SameValueZero. So you'd want to use that when it does matter. But it's actually not easy to write that. The easiest way today is just to use `Array.prototype.includes` to write it. You can also do two other operations that don't technically use SameValueZero, but are equivalent, which are written below there. On the second and third lines there. But it would be super weird to see code that does that. You wouldn't be like, "Oh, this is doing SameValueZero." You'd be like, "What is it trying to say?" So we should just make the thing that we think people should do the easy thing to do. So I would like stage one on that problem statement. + +ACE: +1 + +KG: +1 + +RGN: +1 + +PFC: +1 + +DLM: Okay. Any objections to stage one? Oh, there is a question from EAO. + +EAO: So yeah. You asserted earlier that asking whether an iterator includes a value is a relatively common concern. Do you have any evidence for this? Or is this just the sense of, given that this is very common for arrays that it therefore can be considered common for iterators as well? + +MF: No. I don't have data on this. And actually, in one of the open issues DLM had asked for that as well. But I've not had the time to go look since then. And it was only a day or two ago. I would be fine with a request to find such data. I mean, the reason why I was motivated was I had to do this a couple of times. And I was like, "Oh, yeah. We're just missing this core functionality that people kind of expect to have". But also, I think finding it would be looking for this pattern that we see here with using `.some` today. And I think that iterator helpers are only starting to ramp up in their usage. So it might be a while before we could actually get that data. So it might push this proposal down the road a year or something like that. I feel pretty confident from my own personal convictions. 
But I don't think I have a great way to convince others about that now. + +KG: It's not that you want it on iterators, it's that you want it on iterables—you want to check if a Map contains a particular value, or a FormData contains a field of a particular name, or whatever. It's just that `Iterator.prototype` is our way of working with generic iterables. + +MF: Exactly. In my personal experience, it was working with NodeLists. Which are iterables but not arrays. + +EAO: Thank you, KG. That was a really good way of rephrasing this for me, at least, probably others, too, as to how to think of this proposal. + +DLM: SFC on the queue + +SFC: Yeah. I was just quickly looking at other programming languages. The Rust iterator type has a `find` which returns an index. And `any` and those things. But not `includes` or `contains`. Python, on the other hand, has the keyword `in`. Which is like it's so important that it's a keyword in the language that does the boolean check. So I was wondering if you did more research in where we stand relative to other programming languages. And whether they sort of consider these types of questions as well. + +MF: Yeah. I haven't. I typically do look for prior art in other languages with all of these iterator proposals I have. I was not expecting that there was going to be a question of the motivation here. So that's on me. I would say, though, that JavaScript has this unique motivation that I explained earlier on the slide that the comparison is not exactly straightforward. So it's not everybody just writing what's here on this top line with just use of the triple equal. So it's more strongly motivated in JavaScript because that comparison is more complicated. So even if it wasn't super common in other languages—which I do expect it to be—but even if it wasn’t, I think it might still be motivated in JavaScript. But that can certainly affect your decision at the moment without seeing prior art to go to stage one. 
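As a quick illustration of the comparison subtlety discussed above, here is a minimal sketch using only today's built-ins (nothing from the proposal itself):

```javascript
// The similar-but-different equality choices in JS, and where they diverge:
// === (strict equality), Object.is (SameValue), and SameValueZero
// (the algorithm used by Array.prototype.includes).
const values = [NaN, -0];

console.log(values.indexOf(NaN));   // -1    (indexOf uses ===, and NaN !== NaN)
console.log(values.includes(NaN));  // true  (includes uses SameValueZero)
console.log(Object.is(0, -0));      // false (SameValue distinguishes signed zeros)
console.log(values.includes(0));    // true  (SameValueZero treats +0 and -0 as equal)

// Writing SameValueZero by hand is possible, but not obvious:
const sameValueZero = (a, b) =>
  a === b || (Number.isNaN(a) && Number.isNaN(b));
console.log(sameValueZero(NaN, NaN)); // true
```

`Array.prototype.includes` is currently the easiest way to get SameValueZero semantics, which is part of the motivation for mirroring it on iterators.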
+ +DLM: And EAO on the queue with +1 for stage one and a message. So far, we've only heard support. Are there any objections to stage one? Okay. Congratulations. Stage one. + +MF: Thank you. I have more. + +MF: So I want to discuss the design space, of which I don't think there's very much. I don't like the name `includes`. But `Array.prototype.includes` exists already. We didn't even like the name `Array.prototype.includes`. We didn't want that. That was not our first choice either. But I think that matching precedent is important here. This actually goes directly against what I said in the previous presentation, in that it does not strongly imply whether we're matching a subsequence or an element. It could be either depending on what your background is. Still, I think that is the name that should be chosen because it matches Array and people might already be familiar with that. + +MF: For the comparison operation, there's not a really strong argument for a different comparison operation than what `Array.prototype.includes` uses either. So SameValueZero for that. The only real question is whether it is worth it to include this second parameter, the secret parameter to `includes` that nobody knows about. And it doesn't really make sense. This is the index to start the search from in `Array.prototype.includes`. In an iterator, it would be how many values to skip before actually doing the comparison. We know that by name in iterators as `drop`. You can just do a `drop` and then do an `includes`. And that would be the same thing. But because of familiarity with `Array.prototype.includes`, we say, "Okay. Well, let's include it". And then we use our modern normative convention of throwing on non-numbers and NaNs and negative numbers. And then that's the whole design space that I was able to identify. + +MF: This is what it would look like if that was the design. So here's a generator. It yields two things. The number 5 and the number 3.
And if we create an iterator, and then ask `includes`, it includes three and five but not four. And then if we use the little drop parameter, it will include three and not five if we drop one. And it will include neither if we drop two. This is what the full spec text would look like if it was exactly as I've presented. There's not much. It's really straightforward. And I would like stage two for that design. + +JHD: So I had a clarifying question. I think your spec text may have answered it. But it sounds like what it does is, it consumes the iterator until the iterator is complete or it finds the value being looked for. There's an implicit drop if the second argument is passed. And then it closes the iterator. So what is the use case? Let's say you have an iterator from a generator, from anything that's not a built-in iterator producer. Calling includes checks it, but it also poisons the iterator. So there are use cases where you want to check if it's in there, but then you don't care about ever getting the value out? + +MF: Yeah. Of course. I mean, again, going back to my own experiences, I had an iterator of HTML elements in a NodeList. And I just wanted to see if the HTML element that I had from some other search that I did is in this new NodeList that I just created via a query selector. I got this node out of a query selector before. I made a new query with a different query selector. And I want to see if it's in that iterator. + +JHD: Okay. So this particular use case is one where you don't have a querySelectorAll-type thing to pick it out. You just have a node identity instance. + +MF: I have some element. + +JHD: Like an actual node. Yeah. + +MF: Sorry. Terminology in HTML. I don't do much web programming actually. But— + +JHD: Okay. So yeah. Thank you. That clarifies it, given that you can `drop` to skip elements, then `includes`. I'm not sure how much value there is in the second parameter.
But it's also fine to have it, I guess. + +RGN: I had a similar concern there. And I'm reluctantly in favor of the second parameter. Precisely because of the analogy with `Array.prototype`. Thank you for anticipating it. And I ultimately came to the same conclusion that you did. However, on that same note, this proposed spec text diverges in behavior because it requires an integral number rather than doing ToIntegerOrInfinity. I think that since it's there just for alignment, we should probably align exactly. + +MF: That one's really hard for me to come around to. Can you—is it just because you want to be as close to Array.prototype.includes? + +RGN: Yeah. If we're not going to match it exactly, then we shouldn't have the parameter at all. Because drop is so much more clear. + +MF: With those new constraints, I would probably be on the side of not having it. + +RGN: Also fine. + +MF: I know. Okay. Okay. + +RGN: That's all. + +PFC: I'll say something along the same lines but coming to the opposite conclusion. I think we can't use a negative index in the skipped elements parameter, whereas in `Array.prototype.includes`, you can because you know where the end of the array is. So I figure if we're already not going to match that, then what's the point in having the second parameter? But I also don't care that much. + +KG: We can't simply not have the parameter. We could in principle throw if you pass any argument, but we can't have a situation where someone does `.includes(x, 5)` and that searches from the beginning of the iterator. + +RGN: All right. I'm reversing my position. I found PFC's explanation compelling. You can't do negative anyway. You're already diverging. We can actually incrementally push the language forward even though it would be better to not have it at all. + +MF: Awesome. + +EAO: I would like an exploration of using Python syntax for this. The `in` operator specifically. So you have `needle in iterator`. 
I am perfectly and completely willing to believe that there are good reasons why this would not work. But I think it would be appropriate to do that exploration during stage one of this proposal before advancing to stage two. If this concern can be dismissed completely out of hand immediately, then that's entirely plausible. But I kind of hope that there might be a way of implementing that syntax for us as well. + +MF: To be clear, are you asking to use the existing `in` operator with an iterator on the right-hand side? + +EAO: So I'm asking for a good reason why we would not be able to use the Python syntax, where we have `needle in iterator` as the syntax for testing this. + +KG: It already means something. + +MF: Yeah. It already means something. We can't change what it means. + +KG: It checks if a key exists in an object. + +EAO: And could we not even consider what to do if the thing on the right side is an iterator specifically? + +KG: I would be strongly opposed to changing the behavior of the `in` operator. + +EAO: That is sad. And unfortunate. + +MF: I mean, we can't even say if the thing on the right-hand side is an iterator. That would just mean having these string-named keys. You could maybe tie it to some new protocol that is based on symbols that `in` defers to. But yeah. I don't see how that could possibly work. + +KG: At the cost of slowing down every existing use of the `in` operator. + +MF: Yeah. + +SFC: All right. Yeah. I'm glad we briefly explored that space. I think we've—Okay. Next topic I have on the agenda is—this might be out of scope, because this probably impacts other parts of other operations as well. But if we have a composite, is the expectation that if you call contains—or, what is it, `includes`—then the composites would match? Is that part of the intent? And would that be in scope of the composites proposal?
And would we basically—I'm asking because I want to make sure we're not designing ourselves out of the ability to do an includes that would allow composites to work the way that you want them to work. + +MF: I think this is more of a question for the composites proposal. Because they're working on their notion of equality. And this is just using the same notion of equality that `Array.prototype.includes` is using. So however they decide that they're going to be defining composite equality for the operation that `Array.prototype.includes` uses, which is SameValueZero, then this will do the same. + +SFC: I guess to rephrase what my concern might be: if we add `Iterator.prototype.includes` that does object equality, but composites need composite equality, then I don't want the community to be like, "But I don't want to make all calls to `Iterator.prototype.includes` a little bit slower because we need to check if it's a composite or not." + +MF: Is it the case that right now composite equality is using a method to do their equality comparison? Or are they—in a strict equality comparison operation, are they using— + +SFC: I mean, I think last meeting we were discussing whether it's interned objects or something like that. But I mean, ACE would know a little bit more about that. I just want to make sure we're not designing ourselves out of the ability to use `includes` with composites. For this, I'm just—yeah, is this a question that would be appropriate to ask at this stage? And if your answer is it's in scope for the composite proposal, not this one, then that's a fine answer. + +MF: I do think that. I also do think that it is a possible stage 2.7 thing, not a stage two thing. But oh, ACE has a reply. + +ACE: So I mean, the latest presentation on composites is that I'm currently leaning towards them being interned. So they'd be triple equal. So then this would just work with zero spec changes at all.
It just falls out of the thing. Obviously, there are other ideas, like that they're not interned, but `Array.prototype.includes` sniffs and branch-checks on these things and then does a different equality operation, which, yes, has technically non-zero overhead. But in my experiments, effectively zero overhead in practice. So yeah. As MF said, the intention is it will work in `Array.prototype.includes` however that happens, and I would expect it would also work here. + +CZW: I would like to ask for a clarification on the motivation of the proposal, about how it could be practically useful. For example, I feel like this is encouraging people to iterate multiple times if they find that something is in the iterator after using `includes`, because `includes` does not return a useful item inside the iterator. And if people found that something is contained in the iterator, they need to iterate the iterator again to actually extract the item, which is not the same as an array, because you can index into an array. + +MF: So to be clear, they are passing the item into `includes`. So they already have the item. They don't need to iterate again to get it. This isn't taking a predicate. It's not like `find`. But you could make that argument if you were talking about `find`. But we already have `find`. + +KG: Or findIndex, which we don't have. + +MF: Yes. But we're not proposing that. We're proposing `includes` today. Does that resolve your concern? I feel like—I don't want to have sounded dismissive about it. + +CZW: I don't feel like I'm against it. I'm just a little bit unclear about it. + +MF: I'm trying to figure out if I can quickly unconfuse you. Yes. So if I was just checking that an iterator would yield this value, I don't need to then iterate the iterator to get this value. Because I already have it.
And it's useful to have known that that iterator yielded that value, on its own, just for certain questions, like the example I was giving before of “I have this query selector that I ran, would it yield this value that I already know?”. I don't need to know where it is, what the relative elements are around it, or how many were yielded before it or would have been yielded after it. All I need is to know that it was present. + +RGN: Yeah. I've got another example for that, which is file system contents. If I'm looking through a directory or a directory tree, maybe I care about the presence of node_modules, .git, readme, etc. For any number of entries, all I care about is whether it was found. Because then I can classify the input into one bucket versus another. And it doesn't matter to me how many elements I had to visit before finding the match. + +MF: Okay. So we're at the end of the queue. I would like to ask for stage two for the design as presented, even though we went back and forth on it a couple of times through that. We did arrive at what was originally presented. So do we have consensus for stage two for this proposal design as presented? + +DLM: ACE has a clarifying question. + +ACE: Ironically, I’m helping with the notes but I actually don't know what you said. What is the behavior of the second argument? + +MF: It's all up there on the screen. + +ACE: What about the second? + +MF: So the second one is passing it to drop before doing it. + +ACE: And we throw if– + +MF: And yeah. We throw for NaN. And we throw for negative numbers. And we throw for non-integral numbers. The only reason those are split out a little bit is because we throw TypeError or RangeError as appropriate. + +ACE: Thanks. + +ACE: Yeah. We have two minutes left. And Shane has a clarifying question. + +SFC: I thought we had agreed to drop it. And now we have to add it back again, the second argument. I'm a little confused. + +MF: No.
You must have zoned out for like 30 seconds there where we went kind of around in a circle and arrived back at where we started. RGN was convinced quickly. + +DLM: We have a plus one from RGN in the message. But it'd be great to hear a second voice of support for stage two. + +JHD: I'm assuming that the main thing that motivated the turnaround was KG's comment about the bug proneness of just omitting the second argument. + +MF: No. It was not entirely that. It was also PFC's comment about how we have to be different about handling negative integers. + +JHD: Okay. Yeah. Then I'm in support of the design as is. + +ZB (on queue): support as well. + +DLM: So it sounds like I should ask, since we're running out of time: any objections to stage two? + +RGN: I'm also happy to be a reviewer. + +Okay. Perfect. Thank you. All right. So as a reviewer, do we need more than one? I can't remember. + +MF: Yes. I would like two stage two reviewers. Preferably, those who have already reviewed the spec text. + +RGN: Which is everyone in the room. + +MF: Because it was up on the slides. + +RGN: I volunteer as a reviewer as well. + +MF: Great. Have you reviewed the spec text? Okay. Could we do stage 2.7? + +RPR: Yeah. We're at time. How do you want to handle this, Chris? + +MF: We have 15 more seconds. + +DLM: Do we have support for stage 2.7? We have RGN with support. Another voice of support, please. + +JHD: Editor review? + +MF: Oh, you're asking for 262 editor review. + +JHD: Yes. As far as stage two reviewer review, it seems good. + +MF: I can pull a BAN and say, "I'm a 262 editor." + +CDA: Yeah. So I think what we're talking about here, just to be clear for everyone: part of the entrance criteria for 2.7 is that the assigned reviewers have signed off on the spec text and the relevant editor group has also signed off on the spec text. The question is, are we meeting that bar? RGN is nodding. JHD? You have signed off on the spec text? + +JHD: I have. + +CDA: Okay. You got a plus one from KG.
There's a question from PFC. But I mean, we're past time. You're okay with that? + +PFC: Can I do like 10 seconds? I support this as part of having a full-featured iterators library in the language. I was wondering if in a future meeting you could give a presentation about your iterators roadmap. + +MF: I'd be happy to. I didn't do that because I figured people would be bored by it. + +CM (on queue): What could possibly go wrong? + +RBN: As a newly minted TC39 ECMA 262 editor, that's editor-approved as well. + +CDA: The editor's group has officially signed off on the spec text. Plus one from Shane. Did you want to speak, Shane? + +SFC: Stage one motivation and stage two design shape are the hard ones. Stage 2.7 is pretty easy if we agreed on one and two. + +CDA: All right. Do we have any objections to advancing to 2.7? Seeing nothing on the queue. Sounds like you have stages 1, 2, and 2.7. + +MF: All right. Thank you, everybody. + +### Speaker's Summary of Key Points + +* valuable to have an includes function on iterators +* not easy to do yourself today without an includes method +* best to match Array where possible: name, SameValueZero comparator, skipped elements parameter +* going to follow new normative conventions for parameters/conversions + +### Conclusion + +* Advances to stage 2.7 diff --git a/meetings/2026-03/march-11.md b/meetings/2026-03/march-11.md new file mode 100644 index 0000000..b7c3174 --- /dev/null +++ b/meetings/2026-03/march-11.md @@ -0,0 +1,1265 @@ +# 113th TC39 Meeting + +Day Two—11 March 2026 + +**Attendees:** + +| Name | Abbreviation | Organization | +|--------------------|--------------|--------------------| +| Ashley Claymore | ACE | Bloomberg | +| Aurèle Barrière | AUR | CNRS | +| Chengzhong Wu | CZW | Bloomberg | +| Chip Morningstar | CM | Consensys | +| Chris de Almeida | CDA | IBM | +| Daniel Minor | DLM | Mozilla | +| Dmitry Makhnev | DJM | JetBrains | +| Eemeli Aro | EAO | Mozilla | +| Gustavo Tonietto | GTO | Mozilla | +| Guy
Bedford | GB | Cloudflare | +| Istvan Sebestyen | IS | Ecma | +| James Snell | JSL | Cloudflare | +| Jeffrey Posnick | JPO | Bloomberg | +| Jonas Haukenes | JHS | Univ. of Bergen | +| Jonathan Kuperman | JKP | Bloomberg | +| Jordan Harband | JHD | Socket | +| J. S. Choi | JSC | Invited Expert | +| Keith Miller | KM | Apple | +| Lea Verou | LVU | OpenJS | +| Linus Groh | LGH | Bloomberg | +| Nicolò Ribaudo | NRO | Igalia | +| Olivier Flückiger | OFR | Google | +| Philip Chimento | PFC | Igalia | +| Richard Gibson | RGN | Agoric | +| Ron Buckton | RBN | F5 | +| Samina Husain | SHN | Ecma International | +| Stephen Hicks | SHS | Google | +| Waldemar Horwat | WH | Invited Expert | +| Aki Rose Braun | AKI | Ecma International | +| Jason Williams | JWS | Bloomberg | +| Andrew Paprocki | API | Bloomberg | +| Daniel Rosenwasser | DRR | Microsoft | +| Justin Grant | JGT | | +| Kevin Gibbons | KG | F5 | +| Kris Kowal | KKL | Agoric | +| Michael Ficarra | MF | F5 | +| Mark S. Miller | MM | Agoric | +| Peter Klecha | PKA | Bloomberg | +| Ruben Bridgewater | RBR | | +| Rob Palmer | RPR | Bloomberg | +| Shane Carr | SFC | Google | +| Ujjwal Sharma | USA | Igalia | + +## Opening & Welcome + +Presenter: Ujjwal Sharma (USA) + +RPR: We'll start with our favourite activity. Who would like to volunteer for note-taking? Thank you, LGH. We'll put that hand up over there in the room. People who are remote, you're also welcome to volunteer for note-taking if you wish. We just need one more. + +## Temporal for Stage 4 + +Presenter: Philip Chimento (PFC) + +* [proposal](https://github.com/tc39/proposal-temporal) +* [slides](https://ptomato.name/talks/tc39-2026-03/) + +PFC: Hello, everyone. My name's Philip Chimento, and I'm a delegate for Igalia, and we're doing this work in partnership with Bloomberg. And I'm very happy to be finally getting up here and showing a slide with the title "Temporal for Stage Four." (applause) Well, save that for when it actually gets approved. 
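For readers new to the proposal, a brief aside: these are a few of the well-known quirks of the legacy Date object that Temporal is designed to avoid (runnable in any current engine, independent of whether Temporal itself has shipped there):

```javascript
// Legacy Date quirks that motivated Temporal.
// Months are zero-indexed: 0 is January.
const jan = new Date(2026, 0, 31);
console.log(jan.getMonth()); // 0

// Out-of-range fields silently roll over instead of throwing:
// "31 February 2026" becomes 3 March 2026 (2026 is not a leap year).
const rolled = new Date(2026, 1, 31);
console.log(rolled.getMonth(), rolled.getDate()); // 2 3

// Date objects are mutable, so sharing one across code is hazardous:
const shared = new Date(2026, 0, 1);
shared.setFullYear(1999); // mutates in place
console.log(shared.getFullYear()); // 1999
```

Temporal's types, by contrast, are immutable, use one-based months, and reject out-of-range values by default.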
+ +PFC: I just wanted to share some late-breaking news. Just in time for this, the province where I live has decided to get rid of daylight savings time once and for all. We changed our clocks this past Sunday for the last time, and will now be on permanent daylight time, which is just another reason why you should never use the POSIX "isdst" flag. + +PFC: So this is a stage four request. The entrance criteria here are: two compatible implementations, which pass the test262 acceptance tests. Significant in-the-field experience with shipping implementations, such as experience provided by two independent VMs. There's a pull request for ECMA 262 or ECMA 402 as appropriate; in this case, both are appropriate. And the relevant editors group has signed off on the pull request. So I'll be going through each of these requirements and talking a little bit about them. + +PFC: It occurs to me just now, rather than while I was making the slides, that maybe I should have done a review of what Temporal is. I feel like I've been up here presenting things from Temporal so many times that I didn’t need to, but just in case there are any new people: Temporal is the JavaScript API for handling dates and times in a better way than the existing Date object does. It's a fairly large API that handles exact times, wall clock times, and durations. So there is a large API surface to cover with tests, which I believe we have done in test262 now. For a long time we've been looking for all the edge cases that come up and adding tests specifically for the edge cases we've encountered. And I've used a snapshot testing technique to go over millions of inputs. I'll have more on that in a later slide. And there are quite a large number of tests, I think about 5,000 in the test262 repository, which is, I think, about 10% of the whole test262 corpus. + +PFC: And we have at least two implementations at about 100% test conformance.
There is a little surprise here on this graph as I was updating it yesterday based on the latest nightlies, which is that the Ladybird JavaScript engine jumped ahead, and is also hovering at about 100% test conformance now. Apparently this just happened in the last two weeks since I originally made the first version of these slides. So what you see here, the red bar is test conformance with the Temporal tests, not including Intl Era/Month Code, and the blue bar (or hatched if you are colorblind) is conformance with the Intl Era/Month Code tests, which are related to Temporal but are a different proposal that went to stage four yesterday already. So you can see we actually have three implementations that are around 100% on both of these proposals. + +PFC: As for in-the-field experience, we have, as I said, two browser-based implementations that have been shipping Temporal unflagged to the web for several months. Firefox has had Temporal for almost a year, actually, and we've had it in Chrome and other V8-based browsers for a couple of months now. It's unflagged in several other implementations. I actually have no idea why I put Ladybird under non-browser engines. It is a browser engine, my apologies. And there's an implementation in GraalJS, which is scheduled to be unflagged in the next release, but I'm not aware of any release date for that release. + +PFC: We also have several polyfills for the proposal, ranging from several years old to quite new. There's the Fullcalendar Temporal polyfill, the author of which is kind enough to show up as an observer today. There's one that the proposal champions put out a while ago, and then there's a new one that's been released, I think, just last month, which is smaller and includes less time zone and calendar functionality, which is permitted by the spec. + +PFC: So the proposal went to stage 3 in 2021. This was before we had a stage 2.7.
Compared to a lot of the proposals that we see today, it had an unusual journey. But I do think that the long stage 3 has been beneficial in a number of ways. Having it flagged on the web and having these community-authored polyfills, there really have been a lot of improvements to the proposal, which couldn't have happened if we hadn’t had sort of this long community period for such a large proposal. So you know, we had spec bugs that were noticed by implementers and polyfill authors. We had these same people looking at the spec and noticing opportunities for improving the spec algorithms. This was especially relevant when we still had custom calendars and time zones and there were a lot more user code calls than there are now. Then you know, there are always edge cases, and I've managed to be surprised a few times over the years, like the edge case of Toronto in April 1919 jumping from 11:30 PM to half past midnight in a weird daylight savings time transition. I want to take a moment to thank everybody who's made contributions like this. It's been really helpful and I think it's made the proposal more robust and better. + +PFC: I said I would talk a little bit about the snapshot testing technique. I've mentioned it in a few earlier presentations, but this is a technique that's also been really helpful for surfacing bugs and weirdnesses in the time zone data, the proposal, or the implementations. What we did was take lists of data, like interesting years, interesting time zone transitions, and interesting months, combine those into tests of millions of inputs, then run that through one implementation, make a snapshot of all the outputs, and then compare that against the outputs when you run the same set of inputs against a different implementation. And if they're not the same, then something's up. And at the very least that should be a test262 case. That's been very helpful.
We filed almost 20 bug reports on implementations over the last couple of months when we were doing this and also managed to fix some of them, and we got a lot more targeted test262 coverage that way. I really found this technique helpful, and I think other proposals might benefit from it too, even if they're not as large as Temporal. So if you're interested in that, come talk to me. + +PFC: I wanted to talk a little bit about web platform integration. We have a pull request open for structuredClone support for Temporal objects, which seems to have interest from two implementers. And there is an issue in the proposal repo with other integration ideas, such as being able to pass a `Temporal.Instant` as the expiry time of something or other, or for example passing a `Temporal.Duration` to setTimeout. Now, the WHATWG process moves at a bit of a different cadence than ours, and uses different criteria. The big thing there is user stories and implementer interest. So if you want to see some of these things happen, go to these issues and give use cases. If you're interested in implementing them, also comment there. I would love to see some of these things happen, but we're not going to consider it a requirement for stage four, because Temporal works just fine without any of these web platform integration things, and it's already useful by itself. + +PFC: All right, there are two pull requests open for the spec text. There's one for ECMA 262 and one for ECMA 402. The ECMA 402 one also includes the Intl Era/Month Code proposal. It was easier to put them in the same pull request. So that went to stage 4 yesterday in a separate presentation. The ECMA 262 one is interesting because, as you might know if you've contributed to ECMA 262, everything is in one giant file. I've set this pull request up so that everything from Temporal is not in one giant file. This seems to be a good moment to start doing that.
But we'll talk a little bit later about how exactly we want to publish it. But anyway, the pull request that's open is assuming that we're going to put it right in the same document, but in different files that get included at build time. + +PFC: Editor sign-off. So we've discussed the open pull request with the ECMA 262 editors several times in the editor call, and we have an open question about how to publish it, which we'll get to in a moment. As for the ECMA 402 editors, I know that Ujjwal and Ben have reviewed it. And RGN is a proposal champion, so I assume he's familiar with it, but I'm hoping that we can get an official sign-off from you in the meeting. + +PFC: So a little bit about the future plans. What do we do now? We've been talking over the past couple of days about how the Temporal API and spec text should evolve in the future. And as I mentioned, how should the spec text get integrated into the current ECMA 262? About five years ago when we went to stage 3 and had to declare a bunch of things out of scope, we started collecting ideas for future expansion in an informally named "Temporal V2" repository, which has about 36 issues open that people have contributed over the past five years. Almost all of them are incremental and narrowly scoped. There are no large design spaces like we had with Temporal. And so we expect that updates to the proposal will be in the form of somebody wanting to add an epochMicroseconds property to `Temporal.Instant` or something like that. So the kinds of updates that I would expect in the future would be about the size of `Array.fromAsync` or a `Math.sumPrecise` or something like that. + +PFC: We've talked with the existing champions group, and one option that was raised, which we'll talk about shortly, was to run a new TG. We're not really eager to do that after having run the champions group for years. But we're happy to review and support any follow-up proposals and we'll support any fixes to issues that people find.
So we're not planning to go anywhere. + +PFC: We've discussed three options for how the spec text should get integrated. We can stick it in ECMA 262 like any other proposal. We can give it a separate HTML page in the published ECMA 262, so this would be a part two standard, ECMA 262-2. Another option is to make it a separate standard, which we'd request a number for, and it would be ECMA 430-something. We talked a lot with the previous editor group and yesterday with this new editor group, and MF offered to contribute some slides and talk about how the editor group sees this. So MF, would you like to do that now? Do you want to come up here or do you want to bring the microphone over there? + +MF: Okay. Let's go to the first slide. We have two considerations here. I'll just step back a little bit. I know PFC covered most of the state of things. But you know, when the Temporal champions group came to the editors a couple months back and asked what we were thinking about how we do integration, the editor group at the time gave a lot of feedback. It's very large; it's going to really affect a lot of our editorial concerns with the spec. But also, we have concerns about our ability to maintain it as it may change, if that is what happens in the committee. And our ability to maintain it as we make other changes to the document, refactoring, that kind of stuff. We didn't have confidence in our ability to do that. So we had some recommendations that we wanted to run by the committee. + +MF: So this first part is considering whether Temporal should have a separate standard, with a TG that is responsible for maintaining that and running approval by TG1, very much like TG2 does, and then also its own assigned editor group. So for this we put in arguments on either side of that question. I don't want to read them off to you. But I guess I kind of have to.
So in favor of a combined… + +\[short disruption in transcription\] + +MF: If we have a separate TG responsible for Temporal, then it becomes all of our responsibility to ensure that we maintain some expertise there, right? We would not be able to have the TG with no assigned convener. So that just becomes more structurally ingrained in us. Similarly, with Intl and TG2, a lot of the members of TG1 that are not in TG2 don't care to discuss all the little nitty-gritty details. We want to see what the conclusion was at the end and do approvals. We also think, considering readers again, that a lot of people who care about Temporal just care about Temporal and want to read that. And a lot of people who care about a lot of the other parts of the language won't want to read into the details of Temporal. Obviously there are implementers who will be looking at all the bits of it, but that is still helpful. And like with Intl and like with certain other parts of 262, there are other specs that want to reference just the Temporal bits, or, we'll talk about regex later, other specs that just reference the regex bits. We think it would be better structurally to have them just have to reference the Temporal spec instead of all of 262. Not a huge deal, though. Can we move on to the next slide? + +MF: So when talking about just whether we have separate documents and separate Git repositories, this is not a strong argument. In fact, it's kind of an argument in the other direction. A combined repository and combined documents have a lot of benefits. It's easier for us to maintain editorial consistency because we can just work in one big document, or at least, if it's split over multiple documents, we can work in the same repo. And just finding within files and stuff, it's just easier to manage that way. Also, if we have to split it out, there are going to be things that are shared, shared abstract operations between these documents.
And it's not clear yet where we would put those things. Would we have a W3C Infra-style spec? We'd have to figure that out. It's not a problem. We could make circular references. That is fine. But it's not ideal. We also know through experience that often when developing proposals, when people reach for existing AOs, that actually prompts normative conversations where people were thinking that some minor normative behavior of their spec would work one way, and because they're using this shared infrastructure, it actually is slightly different and actually makes the language more consistent that way. So we could have possible normative consequences from our decision here. Not strictly following from our decision, but through process. And of course, it's much better to maintain just one CI system, one set of dependencies, and that kind of stuff. + +MF: A minor argument in favor of separating is just that we do have a lot of people who prefer the single-page document, but also the single-page document performance is not good at times. And it's bordering on unusable. And making it 50% bigger makes that even worse. But we still like the single-page document. So splitting the document has that benefit. + +MF: Going to the next slide. So the previous editor group, as of a couple of weeks ago, had this strong preference for having a separate standard and a separate maintainers group, as I showed two slides ago. We've had conversations with the new editor group yesterday quite a bit and it seems like we're all on the same page with that same opinion. So without even having to consider that second question, the first question implies that we would have separate documents. We don't think that that is a strong argument on its own. In fact, it's more of an argument in the other direction. But we think that it would be really beneficial to maintain these experts as an official role within this committee.
Does anybody have questions about that? + +PFC: I don't know if you wanted to talk about the future idea of putting more things into other specs? + +MF: Oh, yeah, sure. I'll briefly touch on that. So yeah, when we were discussing this on the reflector, LGH brought up that regular expressions have many of the same aspects. You can make the exact same argument that we make for Temporal about regular expressions. I think all of those same arguments apply. And that makes a really good, strong case for separating regular expressions out of 262. I would love to pursue that. In fact, this is not a new thing. I've been thinking about that for a long time. And many of us have. RBN has also, because he has been very heavily involved in evolving regular expressions over time. It would be great to have a group of experts in regular expressions who can work on the nitty-gritty details of it and, once they've got a really good design worked out, present that work to TG1. It's also really good to have that separated out as a separate spec. There are other standards that refer to our regular expression section and, if nothing else, it would be great to have them just refer to the regular expression standard that ECMAScript uses. So instead of that actually being an argument against having Temporal be a separate TG and a separate standard, I think that that's actually a strong argument for just both of those things being separated out. + +PFC: Okay. Next I wanted to go to the queue. But since a lot of people probably want to have something to say about the form in which to publish it, I thought maybe let's discuss any topics related to the substance of the proposal first. + +CDA: There is nothing on the queue thus far about the substance of the proposal. + +PFC: All right. Well, then let's go to this topic. + +JHD: Let me go back two slides. One more. Cool. Okay. So in general, I think that separating things is the wrong direction.
The document is perfectly performant in Safari; I'm sorry that there are other browsers in which that's not true. Find-in-page is invaluable. And the HTML spec is hugely confusing in that there's more than one document for it, more than one website for it. It's a bunch of scattered specs all over the place. What is the web? It's like 58 different documents. That's absurd. You mentioned regular expressions. There are tons of parts of the spec that have a small set of readers and that need distinct expertise: typed arrays, atomics, regular expressions, Unicode, etc. Separately, the editor group has never been the sole caretakers of 262. It is always all of our responsibilities to maintain the spec. That includes 402 as well. The point of a separate group is to be more efficient with our time and have the people who care the most and have the most expertise be able to talk about that without dominating the time of everyone else. But that doesn't mean it's any less of a shared responsibility. There are already massive editorial consistency problems between the Intl spec and 262. That wouldn't have happened were it in the same document. + +JHD: The only reason that I haven't been pushing for Intl to merge into 262 for many years is because Intl isn't normatively required. So you can have an engine that implements only 262 and not Intl and have it be compliant. If that changes, if we want to change that, then I would suggest it all be in one document. And then as far as referencing subsets, that's what section anchors are for. That's really a minor non-problem. Can you skip to the next slide as well? Thank you. As you've mentioned in these arguments on this slide in favor of combining, shared AOs are a huge issue. It's a pain in the ass when you're doing a proposal to reference multiple biblios and multiple proposals. And that would worsen this problem.
+ +JHD: I guess we have the multi-page version for anyone who's concerned about load time, so I think that's also a solved problem. Yeah. I think that we should be combining it. And separately, I also think that the structure of the documents has zero bearing on the expertise groups that manage it. I think it's a great idea to have a Regex group and a Temporal group and a typed array group and so on. We should be creating small focus groups for anything that isn't universally interesting to the room. That's my rant. + +LVU: I just wanted to reply. You mentioned the HTML spec, but that actually has both views. It has a full-page view and a multi-page view. So if you want to use the single-page view, you can. + +JHD: But there are specifications that aren't contained in that single-page view. I believe. Like CSS. + +LVU: Yeah. URL and fetch. + +JHD: Right. I'm saying there's no single document that describes the web. And I think that's a huge problem. But I'm not in that arena, so I'm not trying to explain about it. + +LVU: Okay. So. But the argument still remains that you could have a single view of everything exactly how you're advocating it should be. And also have multi-page views for people who find that more convenient. You don't have to pick that as tooling. + +JHD: True. And we have that now already. With 262, we already have that, a single-page and a multi-page view. So again, it's just like you said, it's just tooling. The document structure isn't dictated by that desire. So you're right. Some of those arguments that I made don't have any bearing on combining or not. It's just a bearing on what tooling would need to be built. But it's much easier to build the multi-page view, I think, from one document than to smush a bunch of documents together, especially when we're talking about maintaining editorial consistency. And having shared infrastructure. + +LVU: And there's also the question of which view is the canonical one. + +JHD: Right. Exactly.
+ +PKA: LVU asked my question. Just, can we have separate specs, but then just have a superview somewhere? Seems doable. Not a blocking concern for this. + +CDA: Can you go back to the slide about the separate spec, TG, etc.? + +PFC: This one? + +CDA: Yes. So I just want to push back against some of these arguments in favor of separating because I think they're not accurate in some ways. So the idea that having a dedicated TG ensures that we maintain experts, I just don't think that's true. Having a TG doesn't guarantee anything. It's really just an abstraction layer over some subset of the committee. We have had groups operating as essentially de facto TGs for years. The Temporal champions group, module harmony, stuff like that. So the TG is just formalizing something that can already be happening to begin with. And nothing about that ensures that experts are maintained. + +NRO: Yeah. It was mentioned that other specs might want to reference just Temporal bits and wouldn't care about the rest of JavaScript. Do we know if anyone actually wants to? Because a large part of the Temporal spec is still very much JavaScript-specific. There are all of these AOs that deal with actually using the data. But then there is the whole API layer on top of that that is still there. And a separate spec would likely not want to actually reference parts of Temporal, but rather do its own version, referring to only some of the actual internal parts and not the whole Temporal API. + +MF: Assuming that was a question directed at me. I am not actually aware of another spec wanting to make normative reference to Temporal. You are correct that the other things that I've seen that are based on Temporal are following the general layout but not actually normatively referencing Temporal for those, because of JavaScript specifics. Unlike the Regex stuff, where JavaScript is actually normatively referenced. + +RGN: I don't know if EAO is on the call.
But there's current consideration in Unicode MessageFormat which might want to reference Temporal. It's an area of active discussion whether they're going to do that or instead reference RFC 9557 directly. But the possibility at least exists. + +CDA: EAO is on the call. Did you have thoughts? + +EAO: The specific issue here is that RFC 9557 does not provide a well-defined expression for a time string. And the ones that are in RFC 3339 are not the time expressions that we're looking for either. So the only place currently that I've been able to identify that has an expression of time as understood by Temporal is, in fact, the Temporal specification itself. Also noting that this has led me to identify a small issue in the actual Temporal spec. I'm on the queue for that one later. + +CDA: All right. I'm next on the queue. I also wanted to push back against this one, about most of TG1 not caring to discuss Temporal details. Committee members are more or less interested in various proposals to varying degrees. So I don't think it sets a great precedent to say for some category of proposals, these could be in their own spec. And where does that end? Do we poll the committee for every proposal saying, "If 51% don't care about it, now it needs to be in its own spec"? That seems strange to me. + +NRO: Yeah. Also for cases where a lot of the committee doesn't care about details, we have these specific calls for areas of JavaScript: the modules calls, the numerics calls. And for example, for modules, 90% of the discussions we have there are then very shortly summarized and brought to the committee, exactly to avoid wasting the committee's time on details that we can figure out among the experts of the area. And if we still have a group of experts around Temporal, I would hope that something like the current Temporal champions call just gets converted into something like that. + +MF: Yeah.
I guess I should have covered this a little better, because JHD had a very similar point here, that there are lots of areas of the spec that require a certain expertise that only a few members have, and yes, that's true. There are sections of the spec with varying levels of required specialization in their topics. And also, they vary in size. And I think that you have to draw the line somewhere. And for things that are very large and very specialized, they're clearly on one side. For things that are either not large or not specialized, they're clearly on the other side. But there's this whole gray area of, "Yes, you could decide to separate them out. You could decide to include them." That's just a vibes thing. Oh, and what was I going to say? Oh, for Intl in TG2, I think it clearly works very well that they are able to work through these details that require specialist knowledge without having to take a bunch of uninformed input from the committee. And NRO points out that, yes, you can have an informal group that kind of does that same thing and a TG is just a way to formalize that. + +SFC: Yeah. Just to +1 what others have said here. I think that what we currently do with Intl, and what we currently do with modules, and what we currently do with TG3 and some of the security reviews and various other things where we already have breakout groups, those groups work. Those groups typically meet on a weekly or biweekly cadence, and then work through a lot of these details throughout the whole year, and then come to this group after having synthesized all the trade-offs. Maybe then asking this group like, "Help us make a decision about these certain aspects of a proposal that might impact JavaScript as a whole," right? This group sort of is the central plenary, which I think serves its role very well. And yeah, I think that this is how Temporal has already been working.
We have this biweekly meeting and I think that that's what we should continue to do. When there are Temporal issues that come up, whether or not the Temporal thing is moved into a TG or whether it stays as a standalone meeting, that happens whenever we need to discuss Temporal issues as they keep arising. But then we come to this group for any proposal advancements. I think that structure makes sense and that we should keep it. + +CDA: Philip has an item on the queue, but it appeared to disappear from my screen, so I re-added it. + +PFC: I can definitely see both sides of this argument. The example I brought up before, I'm sure most of the committee is not that concerned with fixing a wrong assumption around a daylight savings edge case from 107 years ago. And I don't want to speak for anybody in the committee, but if something like that comes up, I'm sure TG1 is happy to delegate that to a small group and have it be fixed. The other side of that is that I do also think that TG1 having the sort of ultimate say over the contents of the spec does kind of obligate us all to at least get a passing familiarity with the areas of it that aren't necessarily part of our particular area of expertise. So I think this kind of goes both ways. + +CDA: I just had a reply for just a minor point. On this slide, you kind of coupled having the standard be separate with having a dedicated TG and a dedicated editor group, which is fine. But it does not necessarily need to be that way. You could do any combination of those things. If the committee wants to do that. I'm next on the queue again. This was just for clarification. Previously, the last time I checked, there was not interest from the Temporal champions in having a TG or in stepping up as dedicated editors for Temporal. But by these slides, it seems like that position may have changed. + +PFC: I can reply to that. So I don't think the position has really changed.
My position, which seems to align with the rest of the champions, but they can put themselves on the queue if not, is: for me, I've been performing this role informally for six years. And we're hopefully getting this over the finish line. And I'm happy to help other people transition into that role. But I think most of the current Temporal champions group are excited to stick the landing and move on to other things within the committee. So there's that. The idea of having a separate TG, I've gone kind of back and forth on that. But certainly as of our discussions yesterday, especially if the editors are thinking of it as part of a roadmap to split out more parts of the spec, it's fine with me as long as it is not a blocking concern for stage four and as long as I don't have to be the convener. + +SFC: So I wanted to raise the question of, if we wanted to have a separate TG, that sounds like a lot of process weight. Because they have to have a chair and an editor and all that. I observe that the overlap of interest in Temporal in TG2 is higher than the percentage of interest in TG1. There are definitely folks in TG2 that are not interested in Temporal. But the percentage of people who attend TG2 who are interested in Temporal is higher than the percentage of people who attend TG1 who are interested in Temporal. Which could suggest that TG2 might be a natural landing spot if we wanted to go in this direction. TG2 already manages the ECMA 402 specification. If there were an ECMA some-other-number… I don't know what number we're at now. Are we still in the 400s, SHN? SHN stepped out of the room. 426. Okay. So maybe this would be like 435 or something. So if we were to introduce an ECMA 435, then I feel like TG2 could possibly be a group that could field questions about it. But then we would, of course, still stick with the same process of coming back to TG1 to get formal approval on anything.
This is only an idea. I haven't asked TG2 how they feel about this. There might be similar pushback from people in TG2 who are like, "I don't want to worry about date arithmetic issues." Those are not internationalization problems. Although a lot of them actually are, as we found. So I just wanted to throw that out there. + +ACE: So I mean, my preference is not a separate TG. For this proposal, I've noticed I've never been able to weigh in on anything about the date-time aspect of Temporal. It's just that I'm nowhere close to a subject matter expert in that domain. However, I have frequently found myself having a lot of interest and attention and having opinions about just the general programming aspect of it. Does it take an options bag? How does it validate, which things throw versus not? The naming of things. That I've really found very interesting in Temporal. And I would not like to see things split up. I can see all those little minor aspects of the language; no language is perfectly consistent, and JavaScript isn't perfectly consistent. But I think we're doing a good job of trying to be more consistent with our designs. And if we start splitting things up, I can imagine those little things being missed. Because sometimes a proposal will be on the agenda and I'll think, "Oh, it's really not something I know about. I'll skip over it." But then when it's presented, I'll suddenly go, "Oh, hang on a second. That seems different to what I expected." Again, in terms of just the API shape or the validation. So yeah, I think we shouldn't think that just because it's a very particular domain, it should therefore be split off. Because fundamentally, it's a language, and the language should be consistent. + +EAO: I'm active on TG2. I would look to avoid the meetings in which we would be discussing so much Temporal. I would really prefer that it not be added to TG2. + +JWS: Yeah.
I agree with what PFC said earlier, where I personally don't think it should be a separate TG. There are plenty of champions who are still here who are not going anywhere. And they can always refer to one of the champions for reviews or helping push forward any new Temporal-related proposals or any spec bugs. I don't think there would be a high amount of maintenance or work going forward with Temporal. I mean, we talked about this last night. There were one or two things in the V2 repo that were all pretty well scoped. That people could pick up. But I think all of us, as in the spec champions, would prefer to see those things being picked up and discussed in the main plenary, the main TG1, and not a separate track. But in the effort of trying to move things forward, my question was going to be: is this something that we could revisit if people feel that we might need a separate TG? Can we try out not having one right now, because we don't even have any dedicated spec champions or anyone to run this group? And we may not even need it. Is this something that, down the line, we can try and see if we can work without a separate TG? And if we really feel in six months or a year's time that it is needed, because maybe there is a high amount of spec bugs or a high amount of proposals coming through for Temporal, which I don't think there will be, is this something we can revisit then, but still keep the other things, maybe a separate document? So that's just a question. + +API: Just echoing what JWS was saying. JWS nailed it. I feel the same way. I feel like we just need to land it. Temporal is huge. But so much work has gone into it that I really don't see there being a lot of ongoing churn within it. So we just need to land it, and then we'll see how things fall out, right?
And if there turns out to be a lot of time being spent where people in plenary feel like time could be better spent off in a more focused working group, we can discuss that later. But I think right now, it just makes sense to keep it all together. And that's orthogonal from how the documents are presented, right? You could still keep it as a separate document if you wanted, just for organizational purposes. But I think we're good to go. + +CDA: Yeah. I think my comment is in a similar vein, which is that I don't think we can take any of these decisions today anyway. I think it's good that we're talking about it. I think we should wrestle with this a bit. But I don't think we can actually decide today whether it's a separate standard, whether we're convening a new TG, or whether we assign a dedicated editor group. That would have to be a follow-on at next plenary, I think, at the earliest. EAO. Oh, sorry, PFC. Did you want to speak to that? + +PFC: I wanted to suggest that since we started on the big publishing topic, maybe we should go to MF’s topic and then EAO's? + +MF: So I'm hearing the feedback here. The general feel of the room is the desire to, at least for now, try to remain combined. I just wanted to summarize what that would mean for us as editors. Again, we brought this to the group because we felt there was risk in our ability to maintain the document going forward. I do totally respect that you think that there's not going to be very much change. And that'd be great if there's not a lot of normative churn in that or a lot of spec bugs found. That would help reduce this risk for us. The risk we mostly saw was in performing spec maintenance where we do major refactorings, where we have to touch a lot of things. This is kind of like maintaining a large code base where you just kind of don't know what a section of that code base does or you don't know the language it's written in.
It's kind of hard for you to really understand and make confident changes in that. That's kind of how we felt as an editor group. And in our talks yesterday, the remaining editor group kind of felt that same way. I know we have RGN now who can be our expert there. But we may not always have that. And we want to always be confident. In the same way, I would not have volunteered as editor of 402 because I just don't have the relevant subject matter expertise, even if I have the expertise around actually maintaining a spec document. I feel like a lot of that is kind of being forced on me now, in that I have to either try to get that subject matter expertise in Temporal, at least to a point where I can maintain it, or I just can't do the kinds of editorial things that I need or want to do as a 262 editor. So that's our summary. I'm totally happy with following the group's will here and trying that out. And we can always delay this decision to later. That's fine. I just wanted to let you know that that's what the consequences are of that. + +CDA: Yeah. Thank you. That’s very helpful context, of course. DLM. + +DLM: I'm also in favor of keeping Temporal part of the main specification. I just wanted to ask MF a question about what he just said. And that's: if the main specification is being refactored regularly, and if Temporal were a separate specification, is there a risk of drift between the two specifications because people are paying less attention to Temporal if it's separate? + +MF: Not totally sure what you mean by drift. If you're talking about editorial inconsistency, yes, totally. In the same way that there's significant editorial inconsistency between 402 and 262 today. That can happen. I think that's not a terrible outcome. + +DLM: Okay. Yeah. That was my question. Thanks. + +NRO: Just about the problem with most of the editors knowing nothing about Temporal.
Now that this new editor group is larger, I wonder whether some of us could actually be assigned to focus on this. I have PFC virtually sitting next to me all the time. And so PFC could help me when I need help with learning this. Or think about LGH, who can just walk to JWS's desk and ask him things. So maybe we can try to explicitly grow expertise here to make the editors' work easier. + +PFC: I also just wanted to reiterate that I have no plans to disappear and, as far as I'm aware, none of the other current Temporal champions do. So yeah, like I said, it's not a question of wanting to throw it over the wall and wash our hands of it. We're certainly available to answer questions and try to help bring in whatever expertise is needed. + +CDA: JKP’s on the queue with plus one to NRO: "I like the idea of trying to keep SME editors at all times." I just had a question. The de facto plan is that this is going to land in 262; the committee agreed to do that basically at stage 3. So unless we decide to do something different, that's where it's going. So my question is: if after that happens, presuming that is what happens, for some reason, as a committee, we go, "Oh, hang on. We actually don't like this. And we do want to split it off." Is that painful in some way? Is there a cost to not doing it upfront? + +MF: No more so than the typical repo splitting kind of thing of moving issues, trying to figure out how to split history, that kind of stuff. Which is not fun. + +CDA: Well, you mentioned the repo. But previously, we had talked about even if it was a separate spec, it could potentially be in the same repo. + +MF: Yeah. Only if we actually go full separate editor groups would we really want to make sure that we split the repo and stuff. + +DLM: Just a quick reply to what MF said earlier. He said that editorial inconsistencies weren't necessarily that big of a problem. As an implementer, I'd prefer not to have those sorts of inconsistencies.
+ +JHD: And the same is true for practitioners, right? I mean, having to learn a foreign language, essentially, to understand a spec is hard enough. Having to learn two is even harder with 402. And having a third dialect will not make understanding the spec any easier for anyone. So I think it is actually a really big problem to have editorial inconsistencies. + +CDA: I was going to reply while RGN’s finding a microphone. Yeah. I think we have seen that as well with some conventions in TG2 that didn't quite match up with TG1. And that's not been a great source of… + +RGN: That's true. But I don't want it to be exaggerated. The editorial inconsistencies we're talking about are things like whether you phrase something in one step or two steps, whether you use words like else or otherwise. I mean, yes, we value consistency. We strive for perfection. But it's not like you have to learn an entirely new dialect to read ECMA 402. + +JHD: That's true. And I didn't mean to imply that you all had not done a great job keeping up with the refactors that have happened in 262 over the last decade. But there have been a lot of them. And it would have been easy for you all to fall behind. And I think that the more documents we have, the easier that will be in the future. + +EAO: In the context of working on Unicode MessageFormat, and wanting to find there an expression for a time string that we ought to be able to support in that standard, I ended up discovering kind of by accident that I think the time string expressions that we are referring to in `Temporal.PlainTime` specifically are maybe a bit under-specified. I don't know if that's the term. But still, the situation is that, for example, if you look at the MDN documentation for PlainTime, it's referring to an RFC 9557 expression of a time string. And then it kind of details that. But this isn't actually something that I can find in 9557.
And it's not clear that there's any specific one in RFC 3339 where this is coming from either. I mean, it is ultimately really quite well specified in the Temporal spec itself what it is that's acceptable to pass like this. But then the odd thing, for which I filed an issue last week when I noticed this, is that Temporal's PlainTime is the only place in Temporal where we are parsing a string that does not contain a date part, only a time part. And when we are parsing such a string, we do accept it to have a valid offset, and possibly annotations added at the end of the string. So effectively, a full date-time string, just cut so that we only consider the stuff that's after the T separating the date and the time. And then we discard that information. And I found it very surprising to be able to parse a time with an offset, and this is completely valid if the offset is valid, but then the offset is completely ignored. And this is the only place in the spec where we are parsing a time string, and I did not find previous discussion or consideration during the work on Temporal for why this is a thing. From the outside, it looks like this is accidental, kind of left in here. But I just wanted to highlight that, hey, there's this weird thing that probably doesn’t really matter. I'm not going to be blocking Temporal or anything. I just wanted to highlight that there's a weird thing here. And if anyone does have a concern about this, this is the last time to do anything about it. But yeah, I found this to be surprising behavior. + +SFC: My response is that in all the different Temporal types, the decision that was made a while ago was to pull out the fields that are required for that type and ignore the other ones, as long as they're syntactically valid. So for example, if you take a date-time string and give it to a PlainDate, then you get the date fields. But you don't get the time fields.
It ignores the time fields in the string. But it still checks for syntactic validity. That is always the case. So the behavior that EAO just described seems consistent with that. And I'll let Philip address the thing about the time syntax. + +PFC: So we're talking here about strings with a time and a UTC offset. So something like `12:34+01:00`, where if you feed that to `Temporal.PlainTime.from()`, you just get a PlainTime of 12:34. The UTC offset is ignored. Yeah. So like SFC said, that's been a design principle since we started designing what string formats Temporal would accept. It's not accidental that a time-only string plus UTC offset is accepted. That is a format that is specifically called out in its own section of the ISO 8601 standard, which, to my infinite chagrin, you cannot access without paying. Sorry. But yeah, it is a format defined in a standard. And we had no good reason to exclude it. + +NRO: Just to clarify, it's not just that you can pass `Temporal.PlainTime` a time with a time zone offset. You can also pass a date and a time, and the date will be ignored. So it's not just about the time zone offset. It's all of these things. + +CDA: That is it for the queue. We have... oh, hang on. There's a reply from SFC. + +SFC: Yeah. Just to note that the exact grammars are kind of spread out between ISO 8601, RFC 9557, and RFC 3339. But Temporal also reproduces them in a grammar that is directly embedded in the Temporal spec. And each one of these individual specifications that we cite covers a different piece of the grammar. But then we reproduce the parts that we actually use because, for example, ISO 8601 gives sort of goalposts for what a grammar should be without actually writing down the context-free grammar rules. And then Temporal actually writes down the context-free grammar fully. So yeah, that's all. + +CDA: All right. That is it for the queue. 
We have just over 15 minutes left in the time box. + +PFC: All right. If there are no other comments, I think it's time to move on to the request for consensus for stage 4 for Temporal. + +CDA: Do we have support for stage 4? I support stage 4. JKP also says, "Strong support. Thank you all for years of amazing work." DLM, enthusiastically support. AKI, plus 1,000, thank you, everyone. I also have enthusiastic support. I just want to elaborate on that as well. Mine seems lackluster in comparison now. MF, plus 1, stage 4. DJM, plus 1. LVU, plus 1,000. Yay. Celebration emoji. JHD, plus 1. CZW, support. Node.js plans to support Temporal in v26 in April. Oh, did you want to speak? Or are we good? Okay. JSL, plus 1. Ruben, definitely support. NRO on the queue. + +NRO: Yeah. Should we already commit to a discussion in, let's say, March 2027 about having it in a single spec? To see whether at that point we say, "Okay. Actually, there is nothing happening for Temporal. So it's fine." Or whether at that point we want to decide to split. Just to make sure that, even if everything is fine, somebody is responsible for coming back and saying, "Actually, everything was fine." + +CDA: Sure. Do you want to take that action? + +NRO: I'll put it on my calendar; hopefully that works. + +CDA: Perfect. We can put it on the TC39 calendar. RPR, plus 1, after eight years, this is a great day. I think it's more than eight years, RPR. ZB, plus 1, proud to witness. Do we have any objections to stage 4? Any dissenting opinions, people that really don't like Temporal and really love the Date object instead? + +PFC: Well, if you put it like that. + +CDA: Do we have any Date stans in the room or on the call? Hearing nothing, seeing nothing. I believe you have stage 4. Speech! + +PFC: Yeah. Thanks for the overwhelming support and the plus 1,000s. And I just want to take a moment to highlight all the people who have been in the champions group as well. 
It's really been a pleasure working on this with you all. And yeah, it's been a great journey. And I'm glad that we're here. And well, like I said, we're not disappearing. So we'll pick up the odd thing from time to time, I think. I had another slide in case there was something else coming out of the editor discussion that we wanted to request consensus on. But I don't believe there is at this moment. We will follow up with this, I guess. + +CDA: Yeah. Sorry. I don't have the queue open. So I don't know if anybody's saying anything. But I feel like I would object to anything simply on the basis of it not being explicitly called out on the agenda. So whether that's the new TG, whether that's splitting off into a separate spec, whether it's assigning a dedicated editors' group, etc. I think all of those things require more explicit and advance notice. + +PFC: All right. Then just to close it up, I wanted to share some statistics with you all because I thought it would be good to close on a lighthearted note. And maybe it says something about me that I think statistics are lighthearted. We got 12,000 lines of spec text in ECMA 262 for Temporal. In ECMA 402, we are adding 1,600 lines. I took a look at some of the history in the proposal repo: we've merged almost 2,000 pull requests over the years. We've had almost 200 champions meetings. I got that by counting the number of them that were in the minutes, so there are probably some others from the very beginning that weren't in there, because I think the minutes document started in 2018. I made a total word count of the champions meeting minutes, and that's over 200,000 words. And just for scale, that is two to three average-sized novels. And then for this last one, you can calculate using Temporal how many years it has been since Maggie's kickoff blog post, which happened on April 9th, 2017, until the `Temporal.PlainDate` of today. And we want a nice human-readable figure. 
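The computation PFC describes, sketched in code (assuming an engine with Temporal and `Intl.DurationFormat` support; the result depends on the day you run it):

```javascript
// Time elapsed from the Temporal kickoff blog post to "today".
const kickoff = Temporal.PlainDate.from("2017-04-09");
const today = Temporal.Now.plainDateISO();
const elapsed = kickoff.until(today, {
  largestUnit: "year",
  smallestUnit: "month",
});
// On the meeting date (2026-03-10) this is P8Y11M, i.e. 8 years, 11 months.
console.log(elapsed.toString());

// Format it in the current locale, e.g. "8 years, 11 months".
console.log(new Intl.DurationFormat(undefined, { style: "long" }).format(elapsed));
```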
So we'll choose largestUnit years and smallestUnit months. And we'll format that in my locale. And you get eight years, 11 months. So it's been quite a road. Thank you all. And I'll give us five minutes back. + +CDA: So I'm on the queue, with Temporal getting to stage 1 on March 23rd of that year. + +PFC: Oh, stage 1. Okay. + +CDA: Who here was on this committee in March of 2017? How many do we have? Less than 10. We have five, maybe six? + +AKI: I joined two meetings later, when it hit stage 2. + +CDA: A couple. I know there's a couple on the call. WH was here. + +WH: I was on the committee. + +CDA: Do we have an attendees list on this thing? We do. WH was at that meeting. MF was at that meeting. DE, SYG, MLS. I'm skipping over the people that are no longer on the committee. MM was at that meeting. LEO, ZB, and BE were at that meeting. Istvan was at that meeting. All right. Sorry to hijack your... well, I guess you were done. So I'm hijacking. + +PFC: Yeah. Yeah. Done. + +CDA: Okay. Well, I don't think we have another topic that we can slot in for the next eight minutes before the break. Do any of the Temporal folks want to say a few words or anything? Not to put you on the spot. I see JGT has arrived just in time for his speech. + +SHN: Can I just say a few words? Oh, JGT, did you want to say something? First, you need a mic. + +JGT: I just want to say thank you to everybody. I mean, I've been working on this for maybe five years or so. Maybe a little longer, maybe six. And I've been in the tech industry for a few decades. I've benefited a lot from the tech industry. And this was my opportunity to give back to a group of people and a collective industry that's taken care of me and taken care of my family. The topic that we're working on is something that matters a lot to me. And it's just gratifying to have all of you, and especially Philip and the other champions, give me the opportunity to do this. 
And I was talking about it with Chris last night. And I realized that this thing is probably the biggest impact on the world of anything that I will ever work on in my career. And I did it completely for free, because I was interested. And it was fun. And I think that's really the beauty of open source, right, that all of us can do these things that affect so many people. And we're not really doing it to get rich. We're not really doing it to get anything special. And I just think that's great. And I'm so pleased that we've all had the opportunity to do this. So that's it. + +SHN: I just wanted to congratulate everybody: the committee, the champions, the dedication, the persistence that you've had over the last eight years and 11 months. I thought it was 10 years, but that's not far off. You've got an epic novel. I need to think of a title for your epic novel, as you said. I love your statistics. They really put some perspective on what's happened over the last years. You could go back to the slide, because I don't know if everybody saw it. And that's really quite impressive. Almost everything has 1,000 next to it. So congratulations. That's all I want to say. Congratulations. It's important for you. It's important for the committee. It's a huge milestone for Ecma. And we thank you for your work. + +AKI: SHN and I have been discussing this for weeks now, how we were pretty thrilled about this hitting stage 4. And we wanted to just make sure that we send our congratulations. Thanks, SHN. + +SFC: Yeah. I believe my first meeting of TC39, in July 2018, was perhaps the one where Temporal reached stage 2. And that was in Seattle. And MPT and the others were there. And I'm really happy that I was able to be involved with the champions group. I'm very happy that this committee was receptive to the idea of supporting non-Gregorian calendars in Temporal. I know that this has always been very important for the Intl working group. 
And I think that we've really done a lot to raise the bar across the industry for how we do dates the right way. When other programming languages, standard libraries, and independent projects have added dates and times, each time it's been done, it's gotten better. And I've already seen Temporal be the basis for datetime libraries in Rust and on other platforms. So I really do think that we're raising the bar not only in JavaScript but also across the whole programming industry. This has been a really rewarding time that I've been able to work with this committee on the Temporal proposal. And I'm really excited at this watershed moment to see that it's finally reaching stage 4. It's a very big milestone. + +JWS: Yeah. I won't repeat everything that's already been said. I echo all of that. I've been on Temporal since, I think, 2018. It's been great working with everybody on it. I've learned so much about dates and times, more than I would probably ever want to know. But no, it's been great. I just wanted to add thanks to the temporal_rs devs: Manish as well, who I don't think is here; but also Kevin Ness, who is on the call and was able to join; and Jose Espina. They have been really enthusiastic about Temporal. And they have been working on it for years now. They are not part of a large organization. They are just open source contributors who are very enthusiastic about the proposal. And they've managed to help us get a full implementation, as PFC showed earlier, for V8 and Boa. So I just wanted to give them a shout-out too. + +PFC: I'm actually struck by the number of people from the community who showed up to help. Justin was one. Adam is another, who just decided to write a polyfill and showed up with feedback that was really helpful. 
And this has happened over and over again over the years that I've been working on this. So I think this really shows that the topic is alive in the community. I'm really thankful for all that help. And glad that we could do this. + +CDA: Yeah. I think it's fair to mention ABL as well. + +PFC: For sure. Yeah. I always forget that ABL doesn't work for Mozilla. He's somebody who does Mozilla stuff because he loves to. Definitely gets a shout-out. + +CDA: Thanks, everybody. One last thing to mention: James, I think, had opened an issue in the Node repo for Temporal and shipping it in Node. So maybe we can get the loop closed on that. + +JSL: Node 26 should be shipping Temporal. + +CDA: All right. That brings us to our break. We'll see everyone back here at the 45-minute mark. Thank you. + +### Speaker's Summary of Key Points + +* Temporal has several implementations in use already, including two in major browsers which both pass >99.5% of the ~5k conformance tests. +* The conformance tests in test262 are extensive, and we used a snapshot testing technique to find implementation divergences in order to build more test cases. +* We won't block stage 4 on web platform integration, but delegates are encouraged to think of use cases for any web platform integration problems they'd like to see solved. +* We discussed putting Temporal in a separate specification document and/or chartering a separate TG to maintain it. The committee gave feedback on this idea which was generally not in support of taking either of these actions. +* Based on the kinds of ideas which have been raised as future improvements for Temporal, we expect any follow-on proposals to be small and self-contained, and there are no plans to champion any at this time. +* The ECMA-262 editors remain concerned about safeguarding access to subject matter experts on topics such as Temporal, but also other existing parts of the spec such as the RegExp grammar. 
+ +### Conclusion + +* Temporal advances to stage 4. +* At this time, Temporal will not be put into a separate specification document. +* At this time, no separate TG for Temporal topics will be created. We will follow up within 1 year to see if current assumptions still hold. + +## Import Text for stage 2.7 or 3 + +Presenter: Eemeli Aro (EAO) + +* [proposal](https://github.com/tc39/proposal-import-text) +* [slides](https://docs.google.com/presentation/d/1mbkpHjqihHBx6rp07Snb8uB8XevJk7jz2xPSyQppSjE/edit?usp=sharing) + +EAO: This is import text, for which I'll be seeking stage advancement if possible. The very short review of what this is: it adds a new `text` value for the `type` import attribute, and if you use that, you end up importing a file as a string value. This got to stage 2 at the last November plenary. It's good to note that this is not just an ECMA 262 proposal; to effect the changes as a whole, a lot of the spec text actually ends up being in the HTML standard, and there are small pieces that also go into the Fetch standard as well as into the Content Security Policy definitions. I've submitted PRs for all of those, and have gotten some feedback, which has been taken into account in those parts of the specification. + +EAO: There's also testing available for import text in TC39 test262, and in the Web Platform Tests there's a test suite PR for import text. Effectively, this is already available unconditionally in Bun. And in Deno, you get support for this when you run it with `--unstable-raw-imports`. I'm not aware of a browser implementation yet, but I would be very happy to be corrected on that. + +EAO: This did not get 2.7 last time when I presented it, mostly because we ended up discussing quite a bit the concerns that SFC raised about representing non-textual data through this API. 
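For orientation, the usage being proposed looks like this (a sketch; the file names are placeholders, and today this runs in Bun, or in Deno with `--unstable-raw-imports`):

```javascript
// Static form: the module's default export is the file's contents as a string.
import readme from "./README.md" with { type: "text" };
console.log(typeof readme); // "string"

// Dynamic form, using the same import attribute.
const { default: license } = await import("./LICENSE.txt", {
  with: { type: "text" },
});
```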
And so after some work, there is now an open PR effecting a change to the TC39 parts of this spec that SFC at least has previously indicated he would be happy with, to have it go forward and become a thing. + +EAO: The proposal as a whole adds this new CreateTextModule abstract operation, and the proposed change here is to add the highlighted specifier that the input source is a string value that should represent textual data. Which is hopefully crafted sufficiently clearly to match what is required to satisfy everyone. Specifically, the term “should” has no normative meaning in our specification, as indicated by other examples of “should” that I found in the spec. + +EAO: As opposed to the expression “is expected to,” for instance, which seems to correlate in the spec much more with expectations that cause an error to be emitted if they are not fulfilled. And the phrase “represent textual data” is something we already have when we are defining what a string is. The quote here is literally from the existing spec: the String type is generally used to represent textual data in a running ECMAScript program. And so I would like us to be able to merge this PR, and I'm seeking support from people on the call for this. + +NRO: Yeah. I reviewed this PR. I am personally fine with it being merged, because I don't think it causes any problems. I, however, also don't think it's really useful for its goal. The original problem was that we wanted a way to tell hosts not to use the wrong decoding algorithm when decoding whatever source they have into the string that they will give to us. And that's really not something that our spec has any say on, because we do not deal with the actual way the data is stored before it is given to ECMA 262. So a proper spec change that would move towards this goal would need to happen in a different spec. 
I find this text here mostly meaningless because, as you already said, the definition of the String type already says it represents textual data. I believe our definition of textual data is just literally any text, literally any sequence of code points. But yeah, if it's what we need, it's not harmful. + +SFC: Yeah. So this particular language emphasizes that we want the strings that the hosts give us to be of textual content, which I think is important to show the provenance of those strings. In the repository, I use this language of good strings and bad strings, where good strings are ones that contain Unicode text, Unicode code points. And bad strings are those that contain unpaired surrogates, other arbitrary binary data, replacement characters, and so forth. And I think that this language is a small step in the right direction. I would prefer something stronger. But also, I think it's not clear that there's a path to get something stronger in 262. I think that 262 saying that we want these strings to represent textual data is already good, and being more clear about that is already a good step. So, to address NRO's point about it not making a difference: can you go to the slide that shows the existing spec text? It says that the String type is generally used to, right? Meaning that the spec explicitly acknowledges there are good strings and bad strings. It is generally used to represent textual data, but not necessarily. And in the pull request, we are saying that we want the strings that the host gives us to be of the class of strings that are good strings. I think that's the impact of this language. And it's a step in the right direction; if it's something that we can agree on as a committee, then I think that that's progress. + +EAO: I don't see anything on the queue here. But JHD, you're the other spec reviewer for this. 
Would you be fine with PR #10 being merged for this proposal? + +RPR: I don't see JHD in the room. Empty desk. Empty chair. So I don't think you'll get a response to that, EAO. + +EAO: That is a little bit unfortunate. I would like to be asking for stage 2.7 after merging this. And I think it would be good to get an explicit agreement from the other spec reviewer for this. + +RPR: We're sending people out to hunt him down. He's probably in the bathroom. + +EAO: Thank you. I also don't know how this works, but given that we've previously agreed to have JHD as our stage 2.7 reviewer for this, do we actually need his explicit support? Or can we ask somebody else to also be a stage 2.7 reviewer and advance based on their approval of this text? + +RPR: I think we're not under such time pressure that we need to make that call in the next five minutes, because we could return to this later in the meeting. + +EAO: Okay. I mean, just noting that once this is resolved, I'll be asking for stage 2.7 and immediately after that, asking for stage 3. So if anyone has any concerns about stage advancement or any issues around those, those might be discussed now. But I'd be very happy if the queue remained empty, because hopefully nobody has any concerns about such a thing. + +RPR: The queue remains empty. Oh, SFC is on it. + +SFC: Yes. And it seems like we have time. I just wanted to get a temperature check from the committee on this question. I think this spec change helps us get toward that direction. But without this change, it's certainly conformant; with this change, it's maybe, maybe not conformant. It's not entirely clear whether you can import, for example, a JPEG file or a WASM file or whatever you like as text, with import type text. 
My understanding, and EAO can correct me if I'm wrong, is that the current intent is that if you do try to import a WASM file or a JPEG file as type text, what you get back is a string that takes the binary content of the JPEG file and pushes it through a UTF-8 decoder, such that most of the string is replaced with replacement characters. And then you get a string. And that seems like behavior that ideally we wouldn't have in JavaScript. Especially as we're moving more in the direction of being more strict with various types, it seems like a case where, if I tried to load a WASM file with import type text, I should get an exception, not a string with a bunch of replacement characters. + +EAO: The behavior that I understand you to be describing is not actually in the JavaScript spec at all. It would be most directly in the HTML specification, which currently would define the behavior where the image or whatever you describe would effectively have most of its contents replaced by replacement characters. And again, this is not a consideration directly for this group. It's a consideration for the HTML group. + +SFC: I disagree. I disagree with that. I believe that the HTML group doesn't care about strings representing textual content in JavaScript. They care about cases like: does the file you're loading exist, are the permissions correct, are the paths correct? They have a lot of problems like that to worry about. They worry about the integration with the DOM. They worry about a lot of different problems. What they don't care about is that strings in JavaScript represent text and ArrayBuffers represent binary data. They don't care about that distinction. That's a distinction that we care about in TC39. We care that you should load JPEG files as ArrayBuffers and you should load JSON files as strings. That's something that we care about. 
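The decoding behavior under debate can be reproduced outside the module system with a plain `TextDecoder` (a sketch; the real pipeline would live in the HTML spec, not in 262):

```javascript
// Decoding binary data "as text": UTF-8 decoding with { fatal: false }
// substitutes U+FFFD for byte sequences that are not valid UTF-8.
const jpegMagic = new Uint8Array([0xff, 0xd8, 0xff, 0xe0]); // start of a JPEG
const lossy = new TextDecoder("utf-8", { fatal: false }).decode(jpegMagic);
console.log(lossy === "\uFFFD".repeat(4)); // true: every byte was replaced

// With { fatal: true }, the decoder throws instead, which is the kind of
// early failure being argued for here.
try {
  new TextDecoder("utf-8", { fatal: true }).decode(jpegMagic);
} catch (err) {
  console.log(err instanceof TypeError); // true
}
```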
And that's why I think that... can you go back to PR #10? Well, it went away now. The spec change. Yeah, that one. Right? That's why I think that this language is in scope for 262. Because we are telling hosts: don't give us bad strings, give us good strings. That is definitely in scope for TC39. It's not in scope for HTML. + +EAO: This amount of language is entirely within the scope of this group, yes. But going beyond this, to define the behavior that, for example, the HTML standard will define, is not in the scope of this group's considerations. + +SFC: Yeah. My main concern with this pull request is that it doesn't really tell HTML clearly enough what we would like them to do. But again, it's a step in the right direction. And I believe JHD's back in the room, so we could also go back to the question you were asking before. + +EAO: Excellent. + +RPR: Just before we get to that, I think there's a continuation from DLM. So let's go to Dan. + +DLM: Yeah, sure. Just briefly, I was wondering if you could comment on the current status of the HTML integration pull requests. + +EAO: It's there. It's had some discussion on it. It's currently, I think, mostly waiting for us to proceed. Overall, my understanding of the sentiment around this in the HTML group is relatively positive. And I've not heard any concerns about the Fetch or the CSP changes either. Mostly, it's just waiting for the JavaScript parts to advance a bit beyond the current stage 2, so that it becomes more appropriate for HTML to have a valid opinion about the whole thing, and also about the very specific spec language, whether to go with what's in the PR at the moment or something a little bit different. + +EAO: JHD, you're one of the stage 2.7 reviewers here. And the question is, if we were able to merge PR #10, this would effectively unblock our stage advancement to 2.7 and beyond. 
And the thing being added ahead of 2.7 is this clarification to the CreateTextModule AO that is part of this proposal: specifically, that the source string is described as a string value that should represent textual data. And here, specifically, the term “should” has no special normative meaning, and the phrase “represent textual data” is taken from our existing String description. So my ask for you, JHD, would be: would it be acceptable to you to merge this PR and advance? + +JHD: So yeah, definitely. And I just stamped the PR. The only comment I wanted to add on record, which has nothing to do with the 262 spec, is that I wish the web wasn't going to be sending a different MIME type with import text and import bytes. I don't think it should be possible for a web server to choose to send different information depending on whether you request something as JavaScript or as text or as bytes. I think it should be transparent, so that you can only ever send the same thing. But I don't get a vote there. I just wanted to record that opinion. The 262 spec is great with this PR or without. And I'm happy with the PR. + +EAO: I would encourage you to ensure that that comment is recorded also as a comment on the HTML spec PR. + +JHD: I believe I have commented on one, if not both, of the HTML PRs for those two proposals I mentioned. But that input seems to be ignored. + +GB: On this topic of the error for non-textual data: I see that we specify in the host hook that it can return an error. But would it be worth specifying the type of throw completion when it's non-textual data? Like a SyntaxError or TypeError or something like that, to more specifically define what should happen in this case? + +EAO: I believe this is in the HTML standard, because that's where the error would get thrown. + +GB: Right. But specifically, what error type would it be? + +EAO: I'm sorry. I don't remember that off the top of my head. 
I'm looking it up now. Wait. Sorry. What is the condition? Because I think, actually, in the HTML standard there is no error for this, in the current PR for the HTML spec. + +GB: So am I understanding correctly that the wording change here does not have a corresponding HTML spec change to add a new error for any non-textual data types? + +EAO: Correct. The PR for the HTML spec currently proposes to parse any input that is requested as text as UTF-8, and then to use replacement characters for any parts of it that fail to decode. + +GB: Right. Well, I guess my suggestion would be to amend this change to also note what type of error should be thrown for non-textual data. And we can at least decide if it's a SyntaxError or something else. Just a suggestion, but it might be beneficial. Even if it's host-defined behavior, we could still define what the host should do in that case. + +RGN: Yeah. Question for JHD. You mentioned servers sending different MIME types. Can you explain what's going on with that? + +JHD: Yeah. So this type of question has come up before with dynamic import. And with that one, I think there was no way for the browser to mandate that a server send the same content for two requests that are in parallel. So if you kick off two dynamic imports for the same specifier, it's possible for a server to send two different modules down. And so we put wording in the spec that says don't do that, but there's nothing mandating it. And that's still the case here, right? If you kick off two different dynamic imports, one of which is bytes and one of which is text, at the same time, a server could similarly send two different responses just randomly. And there's nothing we can do to prevent that. If a server can distinguish between what import type you're using, then they can choose to send different stuff. 
For example, a server could deny you the ability to import JavaScript as text or bytes and say, "No, it's only available as JavaScript. When the MIME type is this, fine; when the MIME type is something else, I'm going to 404 or whatever." And I don't want that type of ambiguity to be possible. I want the server to not be able to tell the difference between the different kinds of requests for a JavaScript resource. + +RGN: Okay. So specifically, what you want then is for the HTML spec, or possibly the Fetch spec, to mandate the Accept header field value in the outbound request such that the requests are not distinguishable? + +JHD: Correct. I believe. + +RGN: Okay. + +JHD: Yeah. I want to deny servers the ability to differentiate, because there should be no reason for them to do so. + +RGN: Okay. But you're not talking about server behavior at all in that case. The HTML spec would have to clarify that the outbound request is indistinguishable. + +JHD: Right. Because then that obviates servers from being able to have different behavior. + +RGN: All right. Thanks for the clarification. I get it now. Thank you. + +MF: So I apologize. This is logged as a clarifying question. I had no way to reply to GB's topic, which was also submitted as a clarifying question. So I'm going back in time now to try to respond to that. So what error are you talking about? Where is an error? + +EAO: That was effectively my answer. There is no error here. + +MF: But I think your answer was that we don't know what error HTML is specifying. But there's no error anywhere. Where is an error? + +GB: There is an error specified for the HostLoadImportedModule hook; let me pull up the spec here. So basically, when the requested type is text, FinishLoadingImportedModule must be called with a completion record that is either a normal completion or a throw completion. 
I was under the impression that this addition, creating a synthetic module record whose default export is a string value that should represent textual data, implied a case where non-textual data would error. Is that right? + +MF: I see. Okay. So you're talking about in `HostLoadImportedModule`, that step three in the spec where it says the throw completion. You're asking about that throw completion right there? + +GB: Yes. + +MF: Okay. And you're also saying that you think that throw completion is associated with the wording in CreateTextModule? + +GB: Well, I don't understand how else the wording could affect behavior. + +MF: Okay. Thank you. That was what I was wondering. + +SFC: So I haven't heard anyone pushing back against PR #10 on the screen. So if we assume that we're okay with PR #10, then I wanted to see if we're okay with maybe an even stricter version of it: replacing the word should with must, for example, and perhaps also being a little bit more explicit about what we mean by textual data. That could even go into modifying the existing String definition in 262, saying, for example, a sequence of code points of human-readable syntax, which allows JSON, allows most other human-readable syntaxes, and allows regular human-readable strings. The intent here is to disallow WASM and JPEG files and explicitly binary, non-human-readable syntaxes. This language, human-readable, is used in other programming languages to talk about what a textual string is supposed to mean. So I wanted to see about that. I see a reply from RGN. + +EAO: NRO, I think your comment is effectively a reply to this. + +NRO: Okay. Yeah. So we're talking about whether we should recommend HTML to throw, and whatever. 
But the HTML editors have already made it clear that they do not want to throw here: many years ago, the web platform decided to go in the direction of just assuming text is UTF-8 and decoding it as if it were UTF-8, giving garbage out if somebody is not actually using UTF-8. And new APIs have been designed like this; Fetch is designed like this. So HTML made it pretty clear that they do not want the extra restrictions we're discussing.

RGN: Also at our level, as I mentioned, I would push back strongly, in no small part because I don't want to fight over a normative definition of what it means to be human-readable.

KM: Yeah. I would also push back on that. It's pretty ambiguous; I don't know what that means. What if I have some crazy programming language that looks like a JPEG but actually is not, or something weird? You could argue maybe that's not human-readable, but maybe there's someone who can read that. I don't know. I don't want to get into the weeds of what human-readable means, I guess.

SFC: Yeah. Just to push back a little bit on some of this: my understanding is that the comments we got from HTML were about when we were talking about, "Well, what if you're loading a file that's UTF-16 and it has a content type that says it's UTF-16 and you're still pushing it through a UTF-8 decoder? That's wrong." Yes, that's wrong. That's something that we have very little control over in TC39. I agree with everyone on that. I can respectfully disagree with W3C's choice to ignore content type headers. I think that that's a silly choice, and that's a problem we should bring up with HTML. I don't think there's a lot we can do in TC39.

SFC: The question about human-readable versus binary strings is, like I was saying earlier, something where I do see it to be in scope.
And if you do have your funny programming language that looks like a JPEG file, you won't be able to import it with import type text and have it actually be something sensible, because it'll have replacement characters. Replacement characters are just a very bad way of returning an error; we have better ways, like exceptions. If you would otherwise have to emit a replacement character, you get an exception. That seems like a much cleaner approach that informs programmers early on that something was going wrong, rather than delaying the problem. I think that there are a lot of cases where replacement characters make sense and are the right tool for the job. But when we have the ability to take an import and say, "This import is not a valid human-readable string," I think that is something where, as TC39, we should do more and we can do more to enforce this.

RPR: That's the end of the queue.

EAO: I think then what I'm going to do is this: MF made an editorial suggestion to the PR, which I've now incorporated, so that it now says "a string whose value should represent textual data". And as I've not heard any opposition to this, I'm now merging the PR to be a part of the proposal, and so I can advance to this slide. With that spec text, can I have stage 2.7?

RPR: So we do have support from JSL, NRO, and CDA, and support from MF. Is there any opposition to stage 2.7? No opposition, so congratulations. EAO, you have stage 2.7. And I will also note extra support on the queue from RGN and JHD.

EAO: Excellent. And my next question is, can I have stage 3?

RPR: That's right. Okay. I'm just clearing the queue so that we're not confused about what is there. I apologize; oh, I think SFC had a question: what is the nature of the tests?

SFC: Yeah. I had a queue item that went away. So for the stage 3 question (I said stage 2.7), one of the things that I would like to see, now that we have this new language, concerns the tests.
I'm not exactly sure, so maybe you can help me a little bit, EAO, and sort of go over the nature of the tests that you've worked on for this proposal. Given that 90% of the proposal is not in 262, how are you testing this, and what kinds of tests are you able to write for it?

EAO: So the test262 tests are mostly modeled after what was there for imports with type: ‘json’, effectively matching how we exercise that, as the new language that we are proposing for 262 is a subset of that. There are a whole bunch of different parts exercising this. And there's also a test validating that if you, for example, import something as text and then you import it as JSON, you get a different module for that. And similar.

SFC: So is it in scope for us to test the behavior around textual content? Or is that not something that's possible to test in test262?

EAO: It's not possible for us to test that in test262.

SFC: So in test262, we're able to test that the module graph is well defined, and those types of questions. But we're not able to test the example from before: you have a WASM file and you load it with import type text, and something should happen. We're not able to test that in test262, but we are able to test that the module graph is valid. Does that correctly describe it?

EAO: Yes.

SFC: Okay. And, for example, for import type JSON, are we able to also validate that if you try to load a non-JSON file, then you get an exception, and if you try to load a JSON file, then you don't get an exception? Or is that also something that's not able to be tested in test262?

EAO: No, we can test that, because we are doing the JSON parsing in 262.

SFC: Because we do the JSON parsing in 262. Okay. I'm a little worried that we're deferring so much to HTML when we know that HTML is trying to do something that is harmful for JavaScript. But if we're not able to test it in test262, then I don't see a reason that we can't go to stage 3.
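
The distinction SFC draws above, silent replacement characters versus an early exception, is observable today with the web's `TextDecoder` API. This is a minimal sketch of that distinction in current engines, not part of the proposal:

```javascript
// 0xFF can never appear in well-formed UTF-8.
const bytes = new Uint8Array([0x68, 0x69, 0xff]); // "hi" plus an invalid byte

// Default mode: malformed input becomes U+FFFD replacement characters.
const lossy = new TextDecoder("utf-8").decode(bytes);
console.log(lossy === "hi\uFFFD"); // true

// fatal mode: the same input throws a TypeError instead of producing garbage.
let threw = false;
try {
  new TextDecoder("utf-8", { fatal: true }).decode(bytes);
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true
```

The `fatal: true` option corresponds to the "exception instead of replacement character" behavior SFC argues for.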

RPR: CDA eventually wrote some valid code that says `if (entranceCriteria.met) { me.supports(Stage.THREE); }`. Right. I saw the iterations.

PFC: I haven't been too involved in the questions about what can be tested from this proposal so far, but I was wondering if you had web-platform-tests for the parts that can't be handled cross-engine in test262. So, for example, I'm assuming the HTML integration part of this is going to say something about how bytes are decoded into text.

EAO: Yes, it does. And yes, there are tests in the web-platform-tests PR for that.

PFC: Okay.

RPR: So the queue is empty. Would you like to ask?

EAO: I would like to ask for stage 3.

RPR: Okay. Is there any support for stage 3? We have support from JSL and NRO. Are there any objections to stage 3? Nope. So congratulations, EAO. You have stage 3.

EAO: Excellent. Thank you very much. That's it for me.

### Speaker's Summary of Key Points

* The import-text proposal adds support for a new type: “text” import attribute.
* Its PR #10 was the only open issue, adding some language describing the input of the CreateTextModule AO added by the proposal.

### Conclusion

* PR #10 was merged, after some discussion of its specific language and a small editorial change suggested by MF.
* The proposal advanced through stage 2.7 to stage 3.

## Intl Unit Protocol for Stage 2

Presenter: Shane F. Carr (SFC)

* [proposal](https://github.com/sffc/proposal-intl-unit-protocol)
* [slides](https://docs.google.com/presentation/d/1ECuRD0MU6-Z5DzBJaOi5W3u7f2kf_PZYZU9cXjg5Ock/edit?slide=id.p#slide=id.p)

SFC: So, Intl Unit Protocol for stage 2. I've been calling this proposal Intl Unit Protocol. Nicolò suggested that maybe I could consider renaming it; I have a slide about this. But I'm asking for Intl Unit Protocol for stage 2, or alternatively, Intl NumberFormat options for stage 2. So let's go ahead and move on. Can I have a handheld mic for this?
It's kind of hard to talk through this. All right. It's much easier to hold the handheld mic. Cool.

SFC: So the background is that numbers ought to be annotated with the quantity they are measuring, and this unlocks features such as automatic unit conversion. For example, you might have a message in a message format string, either in a built-in message format or in a userland message format, where you want to, say, display road distance. You might have a placeholder for this. What do you put in the distance field in the format function call? As it turns out, we have ways to pass in dates; we have ways to pass in basically everything else you might need to format, except we don't have a way to pass in a number paired with a unit. So, as a step in this direction, the proposal is to make the format function able to take an options bag of a specific shape. Currently, what you can do is what's written on the left here, where you have a style and a unit in the constructor and a value in the format function. Now we can move the unit from the constructor into the format function and put it in an options bag that gets passed to the format function. I'm highlighting the word value in orange because it is changed from the stage 1 presentation: I used to call this number, and now I call it value, to emphasize that this can take a Number, a BigInt, or a string (or, in the future, a Decimal). So I have this slide here to discuss the difference between options bags and protocols. When I say the word protocol, what I mean is a well-specified means of passing data to an API. And in throwing around the word protocol, I did not coordinate with the first-class protocols champions, who will probably be trying to actually give a definition of what a protocol means in JavaScript. As of now, there is not an official definition, so this is the definition I'm using for the word protocol.
Protocols could take various different shapes. One could be a thing that has string-named functions; thenable is an example of that. It could be symbol-named functions. It could be just a string: a protocol could be a string of a specific syntax. In this case, I'm doing it as an options bag, which is an object that has fields. Those fields could be getters, so they could still run code. So this is a specific shape of a protocol, and we considered these other shapes. We don't always take options bags in Intl constructors, because we expect that those call sites are used in such a way that these are basically additional arguments, and they're largely always going to be inline at the call site. Whereas with this, I'm using the word protocol because we fully expect that this protocol will be implemented on userland types and on proposals like the Amount proposal. So it's important that we think about the fact that it doesn't have to be fields; it could be any other shape of a protocol. But we decided to still use fields, because you can implement them using getter functions if you need to, if you're in a class, and it is also ergonomic to use if you're not using a class.

SFC: So, just an open question; if people have thoughts, I'll loop back to this at the end. I'll go through the whole presentation first. For the proposal name, we could call it Intl Unit Protocol, which is concise and problem-focused. Or we could call it Intl NumberFormat Options Bag, which is wordy and solution-focused. Now at stage 2, we have sort of agreed that this is the direction we want to go, so we could rename the proposal. I'm going to continue using Intl Unit Protocol for the rest of this presentation, though, since that's what was on the agenda. All right. Let's continue moving on, and I'll take questions at the end.

SFC: Amount versus Intl Unit Protocol. So a question we have is: why is Intl Unit Protocol its own proposal?
The problem space is sufficiently separable, with different questions and different trade-offs. We felt that the proposals are independently motivated and that we can discuss the design questions independently between the two proposals. Some people have asked me: Shane, you're proposing Intl Unit Protocol; do you see it as an alternative to Amount? No. The impact of Intl Unit Protocol on Amount is that the Amount proposal will have the getters listed on the previous slide. These getters will be how Intl.NumberFormat reads from Amount. We're specifying, in some sense, the shape of what a polyfill of Amount could look like when it's plugged into Intl. But these are two separate questions, with enough independent motivation and enough independent questions to be answered that we can talk about them separately.

SFC: All right. Another change since stage 1 is that currencies are handled as units. The protocol takes a key named unit for both units and currencies. In the stage 1 proposal, currencies were passed using the currency key; they're now passed using the unit key. But the unit must be 3 uppercase letters in order for Intl.NumberFormat to actually be happy with it as a currency.

SFC: Conflicting units. This is also a change since stage 1. When I presented this at stage 1, I thought: obviously, we should throw an exception; I don't see anything else we can do. I got some feedback from the committee, we went back and discussed it in TG2, and this comment from EAO is what we ended up adopting. I'll just read it out: the value that's used as the format argument ought to be defining what we're formatting, while the constructor ought to define how it's being formatted. Therefore, conceptually, the examples above are asking to format a value given in kilometers using meters as the output unit. Either we can do that and convert, or we can't do it and RangeError. TG2 looked at this and decided to adopt that position on the question.
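
To make the shape under discussion concrete, here is a userland sketch. `formatWithUnit` is a hypothetical helper, not the proposed API (the proposal puts this logic inside `Intl.NumberFormat` itself); it reads the two protocol fields with plain property accesses and applies the position just described of throwing a RangeError on conflicting units:

```javascript
// Hypothetical userland sketch of the proposed format-time options bag.
function formatWithUnit(locale, constructorUnit, bag) {
  const value = bag.value; // protocol field 1 (may be a getter)
  const unit = bag.unit;   // protocol field 2 (may be a getter)
  // Current position: a conflicting unit is a RangeError
  // (to be revisited as automatic conversion once Amount reaches stage 2).
  if (constructorUnit !== undefined && constructorUnit !== unit) {
    throw new RangeError(`conflicting units: ${constructorUnit} vs ${unit}`);
  }
  return new Intl.NumberFormat(locale, { style: "unit", unit }).format(value);
}

console.log(formatWithUnit("en", undefined, { value: 5, unit: "kilometer" })); // "5 km"

let conflicted = false;
try {
  formatWithUnit("en", "meter", { value: 5, unit: "kilometer" });
} catch (e) {
  conflicted = e instanceof RangeError;
}
console.log(conflicted); // true
```

The sketch also shows why getters suffice for the protocol: a class such as a future Amount can expose `value` and `unit` as accessors, while plain object literals work unchanged.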

SFC: So what does that mean? It means we'd like to convert. However, a principle that we have in TG2 is that we do not add any more functionality that incentivizes programmers to parse localized output, because if we were to implicitly start doing the conversion, then people are going to use Intl to do conversion and then parse the numbers out of the output to get the converted result. This has bitten us multiple times in the past. Time zone conversion and calendar conversion are two examples where I can point to popular libraries and websites that, when they didn't have access to Temporal, used Intl APIs to do their time zone conversions for them. This was a mistake that we do not want to repeat. Which means that in order for Intl Unit Protocol to do the conversion, we must expose a programmatic conversion API, which is the Amount proposal. So, procedurally, the proposal in front of you today throws the RangeError. When both Amount and Intl Unit Protocol reach stage 2, we will tweak it to do the conversion, and I don't intend to advance Intl Unit Protocol beyond stage 2 until we resolve that issue. So again, they're two different proposals, and they will synchronize once they're both at stage 2. That is the current proposal for how to do this procedurally.

SFC: Another slide just to highlight, again, construction versus formatting. People might ask: well, why didn't you just do this from the start? There are two reasons why constructor options are still a good choice in some cases, two goals for why you want to pick what goes where. Goal 1 is that the formatter encapsulates the locale and formatting options; goal 2 is that the locale data can be loaded and initialized ahead of time in the constructor.
So this proposal helps us get a little bit more toward goal 1 of construction versus formatting: we're doing a better job of passing locale and formatting options to the constructor, and passing data, rather than formatting options, to the format function. So we're getting closer to goal 1. But we're regressing a little bit on goal 2, because previously we could preload the symbols for outputting a certain unit in the constructor, and we're no longer always able to do that; we have to have more data available in order to obey the format-time unit, the unit that goes into the format options. It's not zero impact. We can minimize it, for example, if we take the approach of doing automatic conversion or throwing an exception: if there was a unit specified in the constructor, we don't have to load any other unit formatting data. That will help us with getting toward goal 2.

SFC: All right. I also wanted to add a slide here about extensions to the protocol, just to highlight that the fields we currently have are sufficient for the cases that we're looking at, but we could in the future consider extending it to do things like rational numbers and other precision styles like margin of error. By having a protocol here, we have room to think about representing these other types of numbers, and ways that numbers can represent themselves to an Intl API.

SFC: The spec text is here; I can go ahead and open that. I will note that although the spec text has been in the repo since March 1st, it was not rendered on GitHub Pages until March 10th, and I apologize if you tried to look for the spec and weren't able to find it. If there's anyone here who says, "I needed more time to review the spec; I'm sorry, SFC, I'm sorry that GitHub had problems, but I really need more time to review the spec," I totally understand, and I'll just bring this back in May. But I just wanted to go ahead and open that. I can share the spec text.
This is what it looks like. The way I did it was to add a new abstract operation called GetNumberFormatInput that does the work of reading the fields of the input options bag. I made that a separate AO from ToIntlMathematicalValue, so ToIntlMathematicalValue still exists as its own operation, but there's a new one that wraps it. And then the various formatting functions, instead of taking just a mathematical value, now take the record and pass it down through the stack. I don't intend to walk through the whole spec text in the slides today, but if you'd like to peruse it, it is now rendered and available on GitHub Pages.

SFC: All right. That was the end of my slides. I'm open for feedback; we can go through the queue, and then I'd like to ask for stage 2.

USA: So first on the queue, there's NRO.

NRO: Yeah. I did not understand why we need to block stage 2.7 on Amount. It feels to me like even if we keep the RangeError all the way up to stage 4, we would still be able to change it later to not be an error. After it ships, that's generally a safe thing to do.

SFC: Yeah. The position that we currently have is that we want to do the conversion; we think that doing the unit conversion here is a better experience for users. And we currently can't do it, because we don't have a way to do the conversion that is not Intl-specific and that doesn't involve parsing localized strings. So we would like to do the conversion. We think it's a better proposal if we do the conversion and avoid the RangeError when possible. So while you're correct that we could advance it to stage 4 with a RangeError all the way through, that is not the desirable outcome.

NRO: Okay. I personally still see the value in this proposal even without the conversion. But yes, I understand your position. Thank you.

USA: Okay. Next, we have Kevin on the queue.

KG: What is the actual protocol?

SFC: I can show you on the repo; I can show you in the spec. I think this is the slide you're looking for? Yes. It's an object that has two fields. If the format function is given an object, it reads the two fields using the Get operation. One is called value; the other is called unit.

KG: Is it "value" or "number"? Some of the slides say "number".

SFC: I hope I fixed them all. It's possible that I missed one. This one uses value. Oh, this one still uses number. I'm sorry; I thought I fixed them all. Yeah, it is value. Okay. There you go. Thanks for highlighting that. Fixed.

KG: OK, sounds good.

USA: Next on the queue, we have MM.

MM: So you said that in order for something to be recognized as a currency, it has to be three uppercase letters. In the crypto world, there are many currencies, and there are many currencies that are indirectly derived from others. What is the operational consequence of being or not being recognized as a currency?

SFC: All right. So the currency syntax is fed directly into `Intl.NumberFormat`, and there could be currencies that are not formattable. Such currencies would have the same behavior that they do today if you try to pass them in. But all of the currencies that are currently formattable by Intl follow this syntax.

MM: I see. So the issue is simply whether the Intl specialized formatting can be applied. And if it's not recognized, is it equivalent to just being treated as some other unit, like feet or something, but one which is not known by Intl?

SFC: I believe that's the case, but I will triple-check before I go on the record as saying that.

EAO: My understanding is that right now, what you would in that case be trying to do is format a number together with a currency, using currency as the style, with an invalid currency code being given.
I can't remember whether it's a RangeError or a TypeError that you would get. This detection happens entirely internally in `Intl.NumberFormat`, so there is no external API for inspecting whether a specific code is a valid currency code, other than trying to use it with NumberFormat.

SFC: Yeah. I just verified that if you pass a currency code that's not of that syntax, it throws a RangeError, because it's not known by Intl. So that's what still happens here. It's possible that we could extend this in the future to support currencies beyond the three-letter ones, but as of now, we're just adopting the existing behavior.

MM: Okay. Thank you.

USA: Next on the queue, we have ACE.

ACE: So this isn't a blocking thing, but it looks like this is a breaking change. Right now, if you pass an object, we'll coerce it to a primitive, calling `Symbol.toPrimitive` or `valueOf`; whereas now, the spec immediately goes to only reading the .value property.

SFC: Yeah. I can show where that is. Let's open the new abstract operation, called GetNumberFormatInput. So if the input is an object, then we do the Get on the object; whereas previously we went directly to ToIntlMathematicalValue, which ends up calling—

ACE: First step.

SFC: Yeah. First step, which ends up calling ToPrimitive with hint number. That's right. So yeah, that's correct.

ACE: That feels like a bug. It feels like if the thing is an object and it doesn't implement .value, then it should fall back to this, because that's where this could otherwise be a web-breaking change, right?

SFC: Yeah. I mean, that's exactly the kind of question that I would like this committee to look at. What I currently have proposed here is that, if it's an object, it must have the value field. Right.
So that's good feedback here: do we want to instead check for toPrimitive first? Do we want to check for the value field before we check for toPrimitive, or do we want to do it in the opposite order? What I have here is, I agree, maybe not the greatest solution, so it's good to highlight that. This is exactly the feedback that we want to get from this committee.

ACE: Yeah. And to reiterate, I'm happy to talk about that during stage 2.

USA: And that was the queue. You have a few minutes left. SFC, how would you like to proceed?

SFC: If there is no one on the queue, I'll go back to my last slide, which is asking for stage 2 for Intl Unit Protocol, with all the understandings and caveats that we have described so far in this discussion. But I'm asking for stage 2. Specifically, stage 2 means that we're taking an object as an options bag. That options bag has two fields, named value and unit. The unit field has certain requirements about the syntax of the unit for formatting of units versus formatting of currencies. We will discuss during stage 2 whether we call toPrimitive first or whether we check for the value getter first. And we don't plan to progress beyond stage 2 until we are able to get a solid answer to the unit conversion question.

USA: Thanks for qualifying that, SFC. On the queue, we have NRO first.

NRO: Yeah. I agree with ACE's suggestion of trying to preserve backwards compat for passing in an object that would previously have been coerced, maybe by falling back if it doesn't have the properties, or something like that.

USA: All right. Next, we have KG.

KG: +1 for stage 2, and thank you for doing this as a public protocol. I know we'd previously considered reading internal fields. I think reading public properties, and having Amount have those fields, ends up being a good design.

USA: All right. And next, we have JKP on the queue.
He says support for stage 2, but it would be good to address the possible breaking change. And that's it. So I don't hear any opposition to stage 2, and it's been close to a minute since the last thing came up. So congratulations, Shane, on stage 2.

SFC: Thank you. Next in the time box are two more things. One is stage 2 reviewers. I guess they're called stage 2.7 reviewers now; we used to call them stage 3 reviewers. I'd appreciate a couple of volunteers for that.

USA: That'd be amazing. So next, let's open the floor to volunteers. And yeah, please let us know if you'd like to help. Let's aim for at least two.

SFC: That's the question I'm currently asking: stage 2.7 reviewers.

USA: RGN has volunteered on the queue.

SFC: It's not a very huge proposal, and it will be a nice little tour of Intl in case you're interested in learning more about the Intl spec. I'm just giving a little pitch.

USA: Yeah. My apologies; I was not paying attention to the queue. ACE has also volunteered, so you have your two reviewers. I would like to remind you that this can be a lot of work, so if you'd like to help out, you can join in later, or you can just informally review. For that, check with either the chairs or SFC.

SFC: Thank you, ACE and RGN, for volunteering. I really appreciate it. The last question, since we still have a couple of minutes in the time box, is this question about the proposal name. Is there anyone on the committee who's strongly opinionated here, perhaps the first-class protocols people, who don't want me to use the word protocol in a proposal name? I'm happy to change it. Otherwise, I'll probably just keep the existing name, because it's what is already there.

JHD: So I had this on the queue earlier, but removed it. The definition you gave for protocols is basically the one that already exists. It's defined by the existing protocols out there, which you listed some of.
The only thing that the first-class protocols proposal was trying to do is reify them as first-class objects and provide API and syntax affordances for them. So I don't see any conceptual conflict whatsoever. This is a protocol. And whether the first-class protocols proposal arrives before or after this one, it would just create a first-class object to represent this protocol. So that should be fine.

USA: Okay. Real quick, so we can get through the rest of the queue within the time box. Next, we have NRO.

NRO: The reason I suggested this name change is that some delegates in the past expressed some discomfort with having a protocol that doesn't actually have a built-in implementation, and also that it's only used in one place; it's not really a protocol that's understood across multiple language features. I was suggesting the options bag name because I think it more closely reflects what the scope of the proposal actually is. But I have no problem with keeping the current name.

USA: Great. Next, we have CDA. Oh, sorry; he prefers to keep the name absent a strong compelling reason to change. And then we have CM, who says simple and clear is better than long and precise. We still have two minutes. SFC, do you want to?

SFC: All right. Based on the temperature of the room, it sounds like I'll keep Intl Unit Protocol as the proposal name.

### Speaker's Summary of Key Points

* SFC presented the updates to the Intl Unit Protocol proposal, including changing the field name from number to value, as well as the currency key from currency to unit.
* The committee is happy with the direction of the proposal and approved it for Stage 2.
* During stage 2, work will continue on open questions, including: (1) the degree to which the proposal should block on conversion availability, and (2) making sure that it is web compatible.
* RGN and ACE volunteered to be the stage 2.7 reviewers.

### Conclusion

* Approved for Stage 2

## Intl Energy Units Stage 1 Update

Presenter: Shane F. Carr (SFC)

* [proposal](https://github.com/johanrd/proposal-intl-energy-units)
* [slides](https://docs.google.com/presentation/d/1iwknifQLEIA42KIoveoNLrRZlhEitZQ_JTKADP20rMA/edit?slide=id.p#slide=id.p)

SFC: Back from lunch, it looks like most people are back in the room, so I think there are enough people here that I'll go ahead and get started with the Intl Energy Units update. All right. First, some background. I prepared some slides on exactly how we arrived at the measurement units that we currently have in Intl.NumberFormat. ECMA-402 contains 45 units of measurement; you can see the link to this in the spec, and I'm going to walk through them right now and how we ended up with what we have. These 45 units were chosen based on TC39's judgment call on the motivation of each, either as an independent unit or as part of a common usage pattern.

SFC: So, independently motivated units. I have started the list at number one. These are units that we included because we felt that they were independently motivated. 1 through 8 are the units of DurationFormat; I don't need to read them all to you. Those are the eight units that we need for DurationFormat, so they were included. Numbers 9 and 10 are there because we decided to include basic units of volume. We had previously decided to include number 11, "percent", because it was already in Intl.NumberFormat. Numbers 12 through 22 are there because digital units are important for our users: bits, bytes, gigabit, gigabyte, and so forth; we have included those. We do not currently include mebibit and kibibit, but there is an open issue to add those if there is enough demand for them.
The unit degree, as far as I can tell, was added because there was a delegate who needed it; they asked if we could include it, and it was only one unit, so we decided to include it.

SFC: And then we have some more units that were added because of usages that we wanted to support. So we wanted to support the land-area usage. This term "usage" at the time was very abstract; it's going to become a little bit less abstract now that we are talking about the Amount proposal. But one of the usages is land area, so we decided to support that, and there are two units used for land area, which are both included.

SFC: Wait a minute. I thought I was already on the previous slide. Oh, these are also DurationFormat units, but they're also covered by a usage, so I put them on this slide. So we decided to support units for durations of music tracks and TV programs, which includes minute and second. We decided to include length usages, which is how we get centimeter, foot, inch, and meter, and also road distances for some of the slightly longer ones: kilometer, mile, yard, and also mile-scandinavian, which is not exactly the same as a mile but is used to measure road distances in that part of the world. We decided to include rainfall, for measuring on weather forecasts and such. We decided to include person mass, and those are the units for person mass, including stone, which is an interesting one: I'm told by JWS that whether you use stone as a unit of mass of a person also depends on whether you're in the older or younger generation. But it is included in Intl because there are enough people who do use it, and it is in the CLDR data. Temperature has two units that are used for the CLDR usages, and then finally, vehicle fuel volume, the volume of vehicle fuel. We include those units.
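
The sanctioned-unit list described above is enforced at construction time in today's engines. A quick sketch of the current behavior, assuming an engine with full ICU data, which also shows the gap this update is about:

```javascript
// "kilometer" is one of the 45 sanctioned ECMA-402 unit identifiers today.
const km = new Intl.NumberFormat("en", { style: "unit", unit: "kilometer" });
console.log(km.format(5)); // "5 km"

// "kilowatt-hour" is not on the sanctioned list, so constructing the
// formatter throws a RangeError. This is the gap the proposal targets.
let unsupported = false;
try {
  new Intl.NumberFormat("en", { style: "unit", unit: "kilowatt-hour" });
} catch (e) {
  unsupported = e instanceof RangeError;
}
console.log(unsupported); // true
```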
SFC: So these are the usages and units that we had previously decided as TC39 that we want to support, and these are the ones we currently ship in Intl. Why am I belaboring this? I'm belaboring this because the main thesis of this update is that when we add more units, we need to go back and look at where the current ones came from, and then commit to continuing that bar of motivation for new units.

SFC: So, kilowatt-hour. This is an issue that was filed more than three years ago by @johanrd, where they made the observation that we don't support kilowatt-hours, which are very important, especially with electric vehicles, and other usages for which we don't currently support energy units. Can we please get these? This has 16 thumbs ups. Thumbs ups are not a very good measurement of interest, but it's, in some cases, an order of magnitude higher than most of the other issues on the 402 repository. There are only a few issues that have similar numbers of upvotes. So yeah, maybe we should make it a unit, a thumbs-up unit, but it's not currently specified in CLDR. So the ticket for adding energy units having that many upvotes means that there are also multiple motivated community members who've been helpful in formulating a proposal.

SFC: So there are two usages that we'd like to add, based on this feedback from the community. One of them is energy usage. And here's a specific use case that we want to support. We want to support electricity billing, utility dashboards, electric vehicle charging sessions, battery state, solar generation. The required unit for these is kilowatt-hour. I have an asterisk here because I have a slide coming up later to suggest that there may be cases where we need more than just kilowatt-hour. But kilowatt-hour covers these use cases, and you can see some example screenshots on the right for what that might look like.

SFC: The second one is power. So this is charging again, charging stations.
This also includes solar and HVAC systems and real-time device power draw. And this requires watts and kilowatts. And there are some real-life examples on the right of these units being used and why we want to consider adding them to Intl.

SFC: One question I had the last time I gave this presentation, the last time I talked about this proposal, was: what are the things that we're not supporting? I want to be very clear with this proposal about what is in scope and what is out of scope. This almost clarifies the stage one motivation for this proposal. So what I just showed is what's in scope. Here's what's out of scope, and there's a list here. Scientific applications are not in scope. The users who are motivated for this feature are not scientists. They're people who are building electric vehicle and other energy usage applications. They're not motivated by the scientific applications. That's why we're not including joule in the current presentation. That's not to say that joule can't be added in a separate proposal, if it's motivated. But it's not being included in this proposal. So when I say Intl energy units, it really should be Intl energy usages. That is what we're adding. Utility-scale energy measurements, like trying to measure the power output of an industrial-sized power station: we don't support that usage. We're not supporting automotive and heavy industry, which would add horsepower. We're not including that usage. We're not including natural gas and heating, although one of the usages we are including is solar and HVAC. We're not including natural gas and heating because that would require the British thermal unit. We're not currently adding that. We're not including batteries and small electronics, like milliwatts and mAh, these popular units used for microcontrollers and things. We're not adding those. We're also not adding nutrition.
So these are examples of energy units that have various usages, which are not included in the proposal you see before you today. Again, all of these could be included in a future proposal. But I just want to be super, super ultra clear about what's being included, what's not being included, and why.

SFC: So, the caveat that I had earlier, where I had a little asterisk. Here's the caveat. The caveat is that for the use cases that we have identified, we want to do full research on making sure that we support these usages across all locales. The extremely preliminary research that we've done suggests that there are just three units, watt, kilowatt, and kilowatt-hour, that support all of these usages across all locales. However, I'm not necessarily fully convinced. I've seen at least anecdotal evidence of certain regions possibly using, for example, megajoules as their unit for billing instead of kilowatt-hours (a kilowatt-hour is 3.6 megajoules, so they're fairly close in scale). We need to confirm this and research this. So that is the caveat, and that's why I'm not asking for stage two today: we need to finish that research before we come back and ask for stage two. There's a CLDR ticket tracking this, which hopefully will be resolved within the next few months or so.

SFC: Another question I wanted to address is the relationship to amount. All units supported by Intl number format will presumably be supported for conversion by amount, which means that if you have a watt, you'll be able to convert it to a kilowatt, and so forth. They will be supported in the amount proposal, as well as the usages.

SFC: And that was my update. Yeah, we started a little bit late, so I think we may still have like 10 minutes for questions if we need it. But does anyone want the queue?

MM: Yeah. So there's lots and lots of units left out.
Apparently, for good design reasons, but I'm not quite understanding what those design reasons are. I just listed in my question title some examples of things that are omitted. None of these have to do with energy. They're just with regard to the units supported in general. So just the examples: Kelvin, very surprised to see centigrade and Fahrenheit, or rather Celsius and Fahrenheit, but not Kelvin. Knots, nautical miles, light-year, parsec, atmosphere, PSI, yeah.

SFC: That's a good question. So the list of usages that I gave hopefully clarifies that a little bit. This slide here. These are the usages that we decided to support. TC39 looked through a number of different choices of usages, or at least TG2 looked through a list of usages, and then we brought it to TC39 and people seemed okay with it. And we spent more time in TG2 going over this, but this is the list of usages that we decided to support. We decided that we cared about measuring land area, video duration, road length, rainfall, person mass, weather temperature, and then vehicle fuel. These are the usages we decided to support. We don't support Kelvin because we didn't decide to support scientific temperature. We only decided to support weather-temperature, which is the default usage of temperature. Some of the other ones in your list, like knots: I don't think we ever considered what knots would be used for, nautical distance or something. It would be length-nautical or speed-nautical, whatever that usage would be for. It was not a usage that was considered. Again, we're perfectly happy to consider additional usages, and that's exactly what this presentation is about. This presentation is about "we want to add these usages. We want to add electricity billing, EV charging, solar generation, and real-time device power draw". That's what we're asking this committee to approve.
And then if the committee approves those usages, then what follows are the exact units required to support those usages. So the units that you've listed are just not ones that this committee has yet approved, in terms of the usages for those units.

MM: Okay. Thank you. Is there any sense in which any of the systems around units understands that a kilowatt-hour has a relationship to both kilowatts and hours?

SFC: So that would be a question more for the amount proposal, when we start adding unit conversion. As of now, and again, this proposal is currently just for Intl number format, not for amount, Intl number format just takes a unit and displays it. So it only concerns itself with the localization of the display name. It does not concern itself with conversion from one unit to another. And it does not concern itself with the relationship between them. Amounts will have to go into that territory.

MM: Okay. Thank you. That was all very clarifying.

JSC: Yeah. So it seemed like you took a very application-centric use case approach to your use cases, right? Like going back to the slide where you had road lengths and things like this. I am wondering about the scope you used to figure out what you're including here. I know you said it's still under investigation, etc. But there are a lot of scientific applications that use web technologies, that present user interfaces that use a broad range of dimensions, and so I'm wondering to what extent you would draw the line in scope here for formatting them. The entire SI unit system, basically all of them, are used in some kind of scientific user-facing application, and I dare say many of them use web platform technologies. I know of a couple right off the top of my head, e.g., in biomedical applications, especially big electronic health record systems like Epic that have now switched to Chromium and web platform tech, etc.
They present lots of scientific or biomedical dimensions. They need to format them. There is an internationalization need for them. But are you drawing the line to include them or not in the mission statement of this proposal? Because if you do include applications like that, whether they be electronic health records or other scientific applications or those kinds of things that are user-facing, why would you not include the entire SI system for formatting?

SFC: Yeah. I can address that. So the list of usages that you found on this slide is a list that came out of a TG2 discussion where we looked at all the usages that are supported by CLDR, which is about twice this number. And we made a judgment call based on which ones we felt were most highly motivated for the web. With the idea that we would do exactly what we're doing now, which is let users post issues and gather feedback on which things that we didn't include are well-motivated. And this energy units proposal is an example where users identified units that we didn't have on the original list, and there seems to be enough ecosystem support for them that we're now considering them for inclusion. We're not including things for the sake of completeness. If we cared about including things for the sake of completeness, then yes, we would include all the SI-based units as well as all the NIST-based units, the American ones, and probably many others. But completeness is a non-goal. Usages are goals.

JSC: Yeah. So if completeness is a non-goal, and the demand is for concrete use cases in every case, on an ad hoc case-by-case basis, then do you have a list of criteria for that? For instance, off the top of my head, millimeters of mercury (mmHg). Extensively used in blood pressure displays. Every electronic health record system does them. The biggest one in the United States, Epic, which I mentioned, uses web platform technologies to format them as mmHg.
Is your gut feeling that that would be within the scope of this?

SFC: Yeah. I think I've got like a minute or two left in the time box. So the short answer is that we're trying our best, in a not exactly 100% scientific way, to judge the impact of these usages on users. And that's why this particular issue, which has a lot of upvotes, and which has multiple community members who are offering to put in the legwork to push this proposal forward, has gotten to this stage. What I'd like to have is an exact set of criteria. I think that would be great. We don't currently have that exact set of criteria. And I'm more than happy to follow up with you, maybe in a TG2 meeting, where we can go into a little bit more detail and depth in establishing a very exact quantitative, rather than qualitative, set of criteria.

JSC: That'd be interesting. Thank you.

SFC: All right. We have 30 seconds left in the time box. So I don't know if we have time to get to Stephen's question.

SHS: What is the cost of completeness?

SFC: My quick answer to it is that if we did have completeness, it would just increase the size of browsers. And there was pushback on the original proposal, which did include many more units, that it was increasing the browser payload too much. So that's why we decided to have a more limited set and are working on this problem.

### Speaker's Summary of Key Points

* The Intl Energy Units proposal was reframed in terms of the supported usages.
* Delegates would like more quantitative criteria for adding usages in the future.

### Conclusion

* Will seek Stage 2 in a future meeting.

## Error code property for Stage 1, 2, or 2.7

Presenter: James Snell (JSL)

* [proposal](https://github.com/jasnell/proposal-error-code-property)
* [slides](https://github.com/jasnell/proposal-error-code-property/blob/main/slides-export.pdf)

JSL: So I'm here asking if we can get stage 1 on this error code property proposal.
Maybe if we can go beyond stage 1 to stage 2 or 2.7, that would be nice as well. But let me set the stage a little bit. This is kind of a long-running problem: how do you identify the specific error condition? The convention is to look at the error message. But that's fragile. I once broke NPM, for instance, by adding a period to an error message. Which was fantastic. Or you can look at instanceof. Obviously, with cross-realm, that doesn't always work. You can look at the name. But because of DOMException and how that works, the name doesn't always match. Sometimes it's the class. Sometimes it's the error condition. It's very inconsistent.

JSL: The proposal here is pretty straightforward. Similar to what we did with cause: add a code that can be passed in as part of the options bag on the constructor, and that installs it as an instance property on the error object. If the code is not provided, just like with cause, it is simply not installed and is not present. When you do pass it, you can check the .code property. Okay. Design, again, very straightforward. This is on instances, not the prototype. The type is any value; the normal convention in the ecosystem is to use a string. DOMException and a couple of other older errors use a number. The common convention now is to use a string. But we have really no reason to limit what type it is, so we can just say it's any value. Default absent when not provided. Not enumerable, but it is writable and configurable.

JSL: Why do we need this? I mean, there are several reasons. The biggest one is that it's an already existing path that the ecosystem is already using. I introduced the code property in Node, I think it was in 2017, specifically to deal with the problems with the fragility of checking error messages and that kind of thing. Since then, we've seen ecosystem libraries, Axios, the AWS SDK, a bunch of them. There are actually quite a few that have started to add codes themselves.
We see this in Deno. We see this in Bun. There are other runtimes that have picked this pattern up. There's not a lot of consistency in what the actual values are, but that doesn't matter. We're not defining what the value space is. We're just saying that it's there.

JSL: On the web platform, DOMException does have a code. It's generally considered deprecated. The preference is to use name instead. The challenge with name, though: again, it's very inconsistent. TypeError, for instance, is just the same name no matter what the actual condition is. But when you look at DOMException, okay, you have a name for each individual set of conditions. So name actually becomes a bit ambiguous about whether you can or cannot use it as an indicator of what the actual error condition is. Code becomes very unambiguous. And then the other issue with DOMException: one of the arguments here is, well, just subclass DOMException and change name to what you want it to be. But then that actually adds more ambiguity on top of it, because you're extending that namespace in a non-standard way, and it's still difficult. And we end up with a case where instanceof and name don't actually match. I'm not proposing that we change DOMException at all. I'm just saying that this is where it does introduce a degree of ambiguity. I'm jumping ahead in the slides a little bit.

JSL: One of the concerns that has been raised is about DOMException having Error in the prototype chain and a legacy numeric code. Is there a conflict here? I don't think so. Code is only installed when it's passed via the options bag. The DOMException constructor doesn't accept an options bag; it doesn't pass this when it's created, and therefore it won't ever be installed there. It doesn't actually conflict. With existing web cases there should not be a web backwards compatibility issue.
One of the concerns is, well, is it useful for DOMException? Well, DOMException already has a code, and we're not changing the value of that, so it should not introduce an incompatibility. `'code' in DOMException` is always true, since it's part of the prototype. Is it a problem that we're installing it as an instance property when code already exists on the prototype? I don't think so. The ecosystem experience shows that these can happily coexist. In Node, we've had code on regular errors as an instance property since 2017. We also have DOMException in there, where it's a prototype property, and no one has ever raised an issue that this is problematic. The changes are very straightforward. Just like we did InstallErrorCause, we have a parallel InstallErrorCode. It follows literally all the exact same steps, except instead of cause, we install code.

JSL: The one thing that it does add that's different from what we have now: SuppressedError doesn't have an options bag now. It's one thing that we talked about yesterday. This would add it, so that we could have code on the suppressed error as well. But everything else is identical to InstallErrorCause. I mean, there's some relationship to other proposals, but it's minor, not that big of a deal. Won't every library invent its own codes? They already do. We're not changing that. All we're doing is paving an existing path. Why not use symbols? I mean, the reasons should be fairly obvious. And can't you solve this with subclasses? Yeah, sure. This is kind of the DOMException argument. But again, that doesn't solve the ambiguity problem. So yeah. So I'm asking to see if we can get at least stage 1. Ideally, the spec text, if you look at the repo, is already there. Again, it literally is just like InstallErrorCause. So I'd like to see if we can get it to stage 2.7. But let's start with stage 1.

JGT(?): You mentioned that code on DOMException is already true.
Did you mean the literal true, or did you mean it's just always there?

JSL: It's still there, and I think it should still come up as true when you check for the code or whatever.

JGT(?): What would false mean in that context?

JSL: It just means it's not present. It's not present on the object. It doesn't exist.

JGT(?): Okay.

MM: So my question is not a suggestion for changing this proposal at all. I do support this proposal. My question is, in the context of the thinking about this proposal, what's your reaction to the following possible follow-on proposal? Which I would find attractive. The spec currently specifies in lots of places that if this condition is not met, a TypeError is thrown, or a RangeError is thrown, or whatever. So the spec is always specific about what the class of error is. The spec certainly never says anything about what the message is, which I would support continuing to never say anything about. It would be nice in a follow-on proposal for the spec to always specify what the code is for every error thrown according to the spec. It doesn't mean that every throw in the spec has its unique code. We would still group them into some kind of sensible equivalence classes. But there are certainly many use cases where user code would like to distinguish between type errors that mean different things, all of which are thrown by the platform. So is there anything about your thinking that led to this proposal that would be in tension with that extension?

JSL: It is definitely a possibility. I believe it's something we need to approach very carefully, because it can get out of hand very quickly, if Node's experience is something we can draw on. Node has hundreds of codes now. Some of them very generic. Some of them way too specific. And so I think we would need to approach that part of the problem very carefully. But I think there's definitely room for that possibility. But yeah.
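The semantics JSL described can be approximated in userland today, which is roughly what Node and other runtimes already do. A hedged sketch (the helper name `makeError` is hypothetical; under the proposal, the `Error` constructor itself would perform this installation):

```javascript
// Userland approximation of the proposed behavior: when options.code is
// present, install it as a non-enumerable, writable, configurable own
// property on the instance; when absent, install nothing (like `cause`).
function makeError(message, options = {}) {
  const err = new Error(message, options); // `cause` is already handled here
  if ("code" in options) {
    Object.defineProperty(err, "code", {
      value: options.code,
      writable: true,
      configurable: true,
      enumerable: false,
    });
  }
  return err;
}

const withCode = makeError("connection reset", { code: "ECONNRESET" });
console.log(withCode.code); // "ECONNRESET"

const withoutCode = makeError("no code given");
console.log("code" in withoutCode); // false: absent when not provided
```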
PFC: I was thinking the same thing when I was reviewing this proposal. And there's kind of a strong advantage and also a strong disadvantage to doing that. The advantage is something I've noticed with Temporal: each implementation has its own error messages for what's essentially the same error. And some are clearer than others. So it would be helpful to have a code that you could refer to, that would be on MDN and provide you with a longer explanation of the error than would fit in the error message. So that would be great. The disadvantage, you should note, is that it would make a lot more things observable in the spec than they currently are. There's a lot of room now for implementations to reorder steps that all throw a RangeError, as long as they don't reorder that observably with something else. And I know that, at least for V8 adopting the Rust library for Temporal, it was quite useful to be able to reorder the steps that throw the same class of error, because one would be on the JavaScript side of the boundary and the other would be on the Rust side of the boundary. So that's a freedom that we shouldn't give up without thinking hard about it, I'd say.

MM: But if the range errors are not something that user programs need to distinguish among, then we could specify that all of those specific range errors carry the same error code.

JSL: Yeah. I think it just comes down to: this is a namespace. And with all namespace issues, we have to carefully think through what the ramifications of assigning those values are. In some cases, they're going to be useful. In other cases, they're not. So yeah.

PFC: Yeah. There's a trade-off there between merging error codes so that they're not observable, versus maybe it's helpful to users to be able to distinguish those error codes. So there's not an easy answer, I think.

MM: Yeah. The amount of footwork required to do this extension would be tremendous.
I mean, I'm not eager to, but the world on the other side of this extension would be a very nice world.

KM: I'm reiterating the same point. At least in JavaScriptCore, which errors in the spec correspond to a particular line of code where an error is thrown is extremely ambiguous. And it would be an enormous amount of work to go through the millions of lines of code where anything can throw an error and figure out exactly which error code is supposed to be attached to that line. Is that the right error code? And I agree that, yeah, there are plenty of times where it's very handy to just say that all errors are the same. And we know all of these produce the same error, so it doesn't matter which of the errors you get, because the contents are irrelevant.

JSL: Yeah. I very specifically did not want to cover this whole namespace of errors here, just from my experience with Node. We've been doing this for nine years, and we still have issues with making sure the right error code is thrown at the right spot. So code is a paved path, but there are complexities with it. If we do take that additional step, we have to approach it very intentionally and very carefully.

KM: I guess also, that would be a huge change in the actual spec itself, right? We'd be talking about essentially going through every line of the spec and looking for errors, and then seeing what the right code is for those errors. And we'd have to go through and debate that whole thing with everybody at this committee. And then it's also hard to tell if those errors would be the same, because a lot of them are in abstract operations. And those abstract operations can be called from any other abstract operation. So if we wanted to say, "Oh, well, it's fine because most of the errors have the same code, and you can do the collision," you still have to go through all possible calls.
And anytime a new proposal comes in, does that split them in some way where now there's a distinction again? So it creates an ongoing, in-perpetuity overhead that I think we should be careful about agreeing to before we're ready.

MM: Yeah. I certainly agree with all of that. This would be a tremendously expensive road to go down. But it would be very nice to be on the other side of it.

EAO: The angle I would take would be to start to consider how we get to the other end of the road, as MM put it just now. I think it should be a stage one or maybe stage two consideration of this proposal to determine whether there's an expectation of a near-term follow-up proposal adding codes to spec-mandated errors. The concern I have here is that with this change, we're effectively communicating to developers that it makes good sense to attach a code to an error. And it is presumable that more and more of this will happen, as is already happening. If we are not differentiating spec-mandated error codes from other error codes that are entirely user-defined, we may end up in an unfortunate situation, with the ecosystem adopting certain codes that we as a committee end up determining don't really work for the spec.

EAO: This creates conflict that I don't think needs to exist. And therefore, I think it would be appropriate within this proposal's early stages to determine whether or not there's an expectation to follow it up with spec-mandated error codes, and either way react accordingly.

JSL: Okay. Let me ask you this. What stage do you think that needs to be addressed by?

EAO: It would be nice if it was a stage one concern, but at latest stage two. I am really fuzzy about exactly where this would go. The thing I'm concerned about is not anything specifically in the spec text proposal that you have, but more the implications around this proposal.
And I think if it goes beyond stage two, then the signals become very strong that this is going to become a thing. So does that answer the question, hopefully?

JSL: Yes. I think so. Thank you.

WH: I have the same concern. I assume we're discussing the namespace land grab which is going to take place once the code is introduced. After that we may have trouble agreeing on what the codes of spec errors should be because of various claims on the namespace by external sources.

JSL: Yes. That already exists, to a degree. If we look at all the examples that do exist in the ecosystem, there's very little consistency here, and the risk of even these competing with each other is significant. I actually think that this is another one of the reasons why .code not being limited to just string is useful. Because we might decide at some point that, hey, maybe a symbol is actually the right code for us to use to avoid that namespace land grab, and we can make them unique. This is part of why, as I said earlier, this is a very difficult and careful conversation we need to have if we decide to go down the route of defining our own codes. I have no problem addressing this in stage one or two for this proposal. I just wouldn't want to go down the road of actually starting to define what this namespace is, beyond maybe carving out one particular corner of it, or at least calling out that we want to. But I think a much more difficult conversation for us to have is to define what that actually is.

RBR: So the argument for it was to make error handling consistent, similar to, for example, instanceof or something like that. I don't see that here, because it's just one way of detecting that something is an error. Okay. But the possibility to do so is there right now. It's actually so often used, as you showcased in this list, that I'm in favor of having this as a kind of syntactic sugar for that possibility.
And also making clear that whoever assigns the code is always the author of the error. So therefore, they are the only one to determine this, from my perspective. I'm in general in favor of having something like that, because it makes it nicer to create an error instance for our users. But I don't really see any further benefit from the current approach. Also, when I think about Node.js errors, for example, I'm coming back to this instanceof versus other name-detection things: I normally don't need the exact error code. I normally need one of these 10 error codes. So that's a very different problem, though. And I believe we should separate this a little bit, and make the proposal very specific about what it addresses and what it does not address.

JSL: Yes. I agree. I mean, this proposal as it is is very much syntactic sugar over what is currently the existing pattern. There's nothing here that can't currently already be done just by assigning a code property. That's fine. Just like with cause, there's nothing that says that somebody couldn't already do it. Everything else that we've been talking about here, about the namespace of actual error codes, is a much bigger problem. So I very much want to separate the conversations as much as possible.

ACE: Yes, you could do this in userland. But structuredClone won't copy that thing across. When we added cause, we also added cause to structured clone, to copy over cause. So yeah, it's more than just sugar. Obviously, we could update structuredClone to say, oh, if it has a code, copy that across. But to me, it'd be odd doing it only there if there's no mention of it in TC39. It feels best to do it in both.

JHD: I mean, unless we're planning on adding codes to built-in errors, which it seems clear from the discussion that we're not, I don't see why the namespace thing is an issue. That's no different than string method names or string-based protocols. Or global symbols, right?
You can put an object or a non-global symbol on code if you want. That's one way userland can address it. You can do Java-style com.whatever strings if you want. You can do arbitrarily large numbers. Or you can put BigInts in there. That's a problem userland may need to solve. But also, it doesn't really matter, because most things aren't going to have conflicting error codes coming from the same exception source. Right? Library A might throw something with code foo, and Library B might throw something with code foo. But those are different call sites, and you'll know that they're different because they're different lines in your editor. And it doesn't really matter if their codes are the same. So I just don't think this is that interesting to discuss. And I definitely don't think it should obstruct the progress of the proposal.

RBR: I agree with what JHD just said. And about structured clone: the code property is currently added as an own property normally, so it would also be copied over in most cases; people just assign a code. Although for errors, structured clone does not copy the own properties.

JGT: First of all, I think this is super interesting. I know I've personally spent much of the last six months trying to deal with logging errors in various parts of a code base I had nothing to do with. And I'm angry about it. And so I think having any standardized place where more information about an error can be obtained and generically logged, even if you have no idea where it's coming from, is actually a very helpful thing. So just having a standard name, assuming you have a message you want to put in your log saying error, and obviously the message property is always there, and to also have a code property that's already there and put that in some structured way, for some other piece of your infrastructure or some person to digest, seems like a very useful thing.
And having it spread across 10 different named properties, and needing to try to figure out which one, makes that whole process more complicated. So therefore, I support this. + JGT: One thing to watch out for with these kinds of schemes is what novice developers can sometimes assume. Let's say we have exhaustive docs in the spec; one of the suggestions somebody made earlier was that every time we throw an error in the spec, we would also have a code. There's a slippery slope there where a naive user could be reading MDN, could see that, oh, this function throws errors with this code, this code, this code. And then they will write their code saying, if code equals whatever, then do this; if code equals whatever, do this; if code equals whatever, do this. And they will not think there might be a possibility of some other thing happening there. Right? And certainly, I would say the biggest failure of error handling that I've seen in our company's code base has been exactly that, where people assume that the only errors that can pop out of something are within this narrow range. And anything we do that would encourage more novices to think like that would be a challenge for the reliability of the web. So I just want us to be really careful, not unlike the other voices that have counseled care for different reasons, about specifying too deeply what we're doing for errors, because it can backfire. + JGT: One thing, and this is not really in scope of this, but it is related: from five years of writing Temporal, one thing that has been really clear is that essentially all of our errors are either RangeError or TypeError. And those errors aren't particularly descriptive of the kinds of errors that can happen. And so one additional thing beyond this that might make JavaScript a better platform for dealing with errors would be maybe four, five, six more common types for the kinds of errors you see.
I spent a lot of time in Python lately, which I'm not a huge fan of. But one thing I think they do well is that there's just a wider set of common error types that can be used. And I think that might be a middle ground between "we need to spec every error" versus "we don't give enough information for users about what's going on". I'm not the guy to bikeshed what those common error cases would be. But if somebody were to bring a proposal in the future, I think that would be a really interesting option. Because today, what I see is there's RangeError and TypeError, and then there's a long laundry list of 10 or 20 very specific, narrow errors that are emitted by proposals. And a lot of times, it seems like proposals come up with these very narrow errors instead of being able to leverage a mid-range kind of thing that is shared across the platform. + NRO: Even if we don't want to go into detail, with error codes like "Temporal PlainDateTime from wrong date type" or something like that, we can still have larger categories like "invalid argument error" while still representing them as error codes. So the two things do not exclude each other. We don't need to go through the prototype approach, especially given that changing the prototype of existing errors is very likely to be a problem. + JGT: And that was actually specifically what I was referring to: if we had had 5 or 10 more common cases like that, then the need for unique codes on things might be a little less. + EAO: Just a thought that most of the issues around code, I think, are namespace issues. And we could theoretically impose a restriction on the string values that we accept for the code argument in the options bag, for example requiring them to include a colon (":"). That would effectively create namespaces that people would use. But I also realize that this would conflict with most existing code values, such as the ones used in Node.js and so on.
But it's a potential solution that I think we could technically do that would solve namespace issues going forward, at least. But I'm not sure if we want to consider something like this. + JSL: Yeah. My deep fear on that would just be that I don't think it would be web compatible; it would just break too many existing cases if we tried to enforce that. The stronger option, though less ergonomic, is that we already have a type for namespacing: symbols. That would be an approach we could take that we could enforce. But from an ergonomics point of view, it's kind of terrible. But yeah. + EAO: To clarify, the idea I had was only for the new API that we are introducing, and to otherwise allow code, as it is already being set, to be set to any string or any value. + NRO: Yeah, just on symbols: remember that when you implicitly coerce symbols to strings, it throws. And it's very common to try to convert an error to a string to log it somewhere. + JSL: Yep. + CDA: All right. Let us do a call for consensus. On the queue, we have ZB: support for 1, 2, and 2.7. JHD, did you want to speak, or should I just read what you wrote? + JHD: So I do support all those stages. I'm volunteering to be a reviewer, and I have looked at the spec and reviewed it. I don't think we should put any requirements on the code, especially since we're not requiring it to be a string. As I've stated already, I don't think namespace issues are a real problem, or one that we need to solve. And I'm looking forward to standardizing this very common pattern. + CDA: DJM is on the queue with +1 for stage one. Next is KM. + KM: I think there's still, at least on my end, what feels like a lot of areas to flesh out. And I'm hesitant to make the commitment that I think it's going to make it into the standard until at least some of the broader questions are resolved, especially with HTML integration, as AVK brought up. So I'd like to talk that over with him a little bit.
That only came up in the last few days, so I haven't had a huge amount of time to discuss it with him. But I'm okay with stage one. Seems reasonable. Looking at areas to explore. I spent a little bit of time trying to work out some of the other stuff. + RBR: I believe the problem space is so common that it would be worth it even if it were only sugar, which I learned that it is not. So I like that additional benefit. It's good for stage one. For stage two, I would like to clarify that, at least in this specific proposal, we are not planning to add error codes to built-in errors and similar; it's really just about the handy functionality part. + CDA: CZW says, "Support for one and two. Happy to be a reviewer." EAO has plus one for stage one. MM (Mark) is on the queue with plus one for one, two, and 2.7. I think we're just going for one at this point, for the record. And we have run out of time. I do note that there's a couple of things in the queue, so maybe we could do a continuation if needed. + JSL: I don't know if a continuation is required if we're staying at stage one. I think both of these could be opened as issues in the repo, and then we'll address them. + CDA: For NRO and Justin, please raise your thoughts in the error code property repository. + JSL: And I will get this transferred over to the TC39 organization tonight. + ### Speaker's Summary of Key Points + * The `code` property on Error is an established pattern in runtimes and frameworks; it makes sense to codify it + * Add it as an optional instance property following the same pattern as `cause` + ### Conclusion + * Stage 1 approved.
James will initiate the move to the TC39 organization tonight + * Pending some additional conversation about whether and how we should approach defining codes in the language + ## Update on ESM Phase Imports + Presenter: Guy Bedford (GB) + * [proposal](https://github.com/tc39/proposal-esm-phase-imports) + * [slides](https://docs.google.com/presentation/d/1inTcnb4hugyAvKrjFX_XHTnxYaPW0rodWwn0VFmlq24/edit?usp=sharing) + GB: Okay. The last time I gave an update on ESM phase imports was just over one year ago. So I wanted to make this a general update, as a refresher for folks who are not familiar with where we got to in November (actually, November 2024), and cover a couple of normative changes as well. Please feel free to ask clarifying questions that are on topic. If it is about an idea you just had on a slide you just saw, I'm probably going to address it shortly, so save those for later on. But if it's a clarifying question, I can take them at any time. So I'll start off with a refresher of where we are with WebAssembly imports and source phase imports. This is a complete recap of the source phase imports proposal, which is separate to this proposal, for those who are not familiar. The WebAssembly ESM integration, which is phase three in the WebAssembly process, specifies the ability to import WebAssembly in the same way that you can import JavaScript. And it's useful because you can tree-shake WebAssembly, you can statically analyze your code and tell what WebAssembly you're loading, and it integrates into the browser security model, in that you don't need `wasm-unsafe-eval` in order to execute your WebAssembly. But many WebAssembly embeddings and toolchains require more fine-grained control than this when they execute their WebAssembly. They actually want to control the WebAssembly instantiation itself.
And so for that, we have source phase imports, which is this extra syntax, `import source`, for the WebAssembly module. It gives you not the instance, but the compiled WebAssembly module, which you can then instantiate, passing custom imports or import wrappers, and then wrap the exports before creating a JavaScript wrapper module that basically exports the functionality and wraps the WebAssembly. You still get the tooling benefits, ergonomics, and static analysis; your bundler can easily handle the WASM instead of having to embed base64. + GB: And security model integration. So this is the flexibility, and embedders can choose which model they want to use: whether they need the flexibility of custom instantiation, or whether they want the closer integration of instance imports that support tree-shaking. + GB: So just an overview of the status. The WebAssembly ESM integration with source phase imports went to phase three in the WebAssembly CG in April '24. The ESM integration landed in HTML, for importing both JavaScript and WebAssembly, in September '24. Source phase imports reached stage three in October '24. And the Node.js source phase implementation was released, flagged, in February '25. We now have implementation support in WebAssembly tooling and in JS tooling: SWC, esbuild, OXC, rollup.js; I just got a PR for that a few weeks ago, which should hopefully finally land now. We've got Node.js support for both phases, Deno support for the instance phase, and Cloudflare Workers support for the source phase. And browsers: we're waiting on you. So I think we're close. And I know that Chrome has a complete source phase implementation that's just ready to go. The actual instance phase, the ability to just import WASM, is unflagged in Node.js as well. And so, I'm going to do a very brave thing here and give a brief demo. + GB: All right.
So this is a simple example of a WebAssembly module that you might want to execute. I'm importing the string built-ins functionality in WebAssembly here in my WebAssembly module, where I can cast from an externref to a string, and I can concat two strings. And then I'm importing, from this WebAssembly string constants import, the "hello" string. What this does is basically the same as if you were compiling constants into your data section, but this becomes an actual JavaScript string constant available on the import, and it becomes this hello global. We then have a greet function that's going to take a name as an externref. We're going to call the concat function and concat the hello with the string cast of the local externref that's the name. And so we're going to get this basic greeting concatenation. So if I want to run this WebAssembly in Node, I first need to compile it. I've just got that one WAT file, so I compile `greet.wat` and output `greet.wasm`. Then, if I create a test, I'm going to `import source` the module from `greet.wasm`. And then I'm going to go `new WebAssembly.Instance(mod)`, and that's going to give me my greet function. And then I'm going to greet "source phase imports at TC39". If I run that with Node, I get nothing, because I have to log the string. And then we get "hello TC39 source phase imports". So a really simple WASM module, and it worked. + GB: If I want to use the instance phase instead, I'm just going to put this in a block so that it's scoped. I can go import greet from `greet.wasm`, and then I can just greet, and again, I'm going to log that. And that, you can see, is a tree-shakeable variant. And here we have the instance phase working. So with the source phase you've got the control in the `new Instance`, where you can pass custom imports.
Note that I didn't have to pass any of those built-in imports, because these are removed at compilation time and they're not actually part of the module imports by the time you get it here. So the string handling here and the enabling of the WASM built-ins is all part of the compilation. Including, recently, the string constants functionality as well, which got consensus in the WASM CG for a string constant name. That was possible because we now have compact imports, which is the same thing JavaScript does, where you don't have to repeat both strings twice in the binary format. Yeah. That's the demo. All right. Let me resume. + GB: So what we're talking about today is none of that. That's all complete. We're talking about the ESM phase imports proposal. This was frozen in February '24, reached stage 2.7 in December, sorry, November, and has been incorporated into the module harmony process, so it's layered with the other module harmony specifications. And this is a low-level proposal that directly enables the primitives for module declarations and compartments. The motivating use case is workers: creating workers from modules, solving ergonomics and static analyzability for workers. Today you have to create a worker from a URL; you have to figure out where the URL is, and then you have to set type module. And bundlers kind of don't easily find these things. The idea is that if we have a source phase import, which can behave like a module handle, or a module capability, you can just pass it to new Worker, and it can automatically know you have a module worker. And bundlers can see that and more easily understand what's going on. So yeah. The ESM integration and the source phase for WebAssembly are completely self-contained. This ESM phase imports proposal is the next step that comes after that, and completes the source phase model for JS modules as well as the existing WebAssembly modules. + GB: Okay.
So for the discussion today, I would like to give an overview of the proposal and the exact semantics, as a full refresher, and then go through the transfer and import keying semantics, to then have the context to discuss the two normative PRs that I would like to get consensus on if possible. If folks feel that we don't have the context today to make that decision, I'm happy to bring it back, but it would be nice if we can build up the context. And if people could share any questions as we go. So the ESM phase imports proposal specifies the source value for JavaScript, where the source phase imports proposal specified getting a WebAssembly module and adding AbstractModuleSource into its prototype chain. With ESM phase imports, we would be getting a ModuleSource. Sorry, the slide is wrong. It is not an exposed global. Actually, is it an exposed global? I haven't checked the spec in a while. + GB: I need to double-check that. We did discuss it at plenary and we decided what we wanted it to be. I'm sorry, I just had a moment of doubt there. But yeah, ModuleSource is the object, and that also has AbstractModuleSource in its prototype chain. So what is a module source? It's a JS object that has this internal slot pointing back to the source text module record itself. And then that source text module record in turn has a module source slot, which points back to this ModuleSource JS object. And that's separate to the host-defined data on the module; it's just an opaque object that's used to represent the module. It also represents the module's key, because you have a reference to the module and the module has its host-defined data. It also associates with the URL. And ModuleSource is specific to JS modules; it's something like a module handle. We did discuss whether it should have reflection for reading the imports and exports of the module.
But that was deemed out of scope for the proposal; it could be added in the future. The constructor is also not going to be specified, and likely will never be specified, because it is a new evaluation vector in the language, and there is resistance to adding new evaluation vectors. It would instead be possible with inline syntax to use eval to get a module. Or possibly, once we have the safe forms of creating these things with inline module definitions, we could reconsider whether we can do a constructor. But I think we likely aren't going to. + GB: And this object supports usage in dynamic import, it has very well-defined keying semantics in the host, and it supports serialization in postMessage and between agents. So what happens when you dynamically import a module source? It looks up the module record from that internal slot and makes sure that it's executed, and that's basically it. So that was relatively straightforward. The way this works in the spec is we update the import specification to check if you pass an object. If you do, we have the spec method that gets the module source's module record, which will either return the module record or return "not a source". That checks the source text module record internal slot, falling back to a host hook for getting the module source record, which is the WebAssembly source phase integration, because that is an embedder-defined module source. So when you pass a WebAssembly module, you're failing the first check, which applies for JS modules only, and you're hitting the second check. So the same thing would be defined for WebAssembly modules. This specification, by defining dynamic import for JS source phases, in turn defines it for WebAssembly modules as well. So you could actually, from a WebAssembly module object, just dynamically import that handle to the module in the same way.
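(Editorial note: a sketch of the semantics just described, in proposal syntax; this is not runnable in current engines without flags, and `./mod.js` is an illustrative specifier.)

```js
// Source phase: get the ModuleSource handle without evaluating the module.
import source mod from "./mod.js";

// Dynamically importing the handle evaluates the underlying module record:
const ns1 = await import(mod);

// Importing the URL resolves to the same instance within this registry,
// because the handle is the canonical source for that URL here:
const ns2 = await import("./mod.js");
// ns1 === ns2, per the keying semantics described in this section
```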
+ GB: There is a PR for the WebAssembly ESM integration featuring this functionality. The semantics have been discussed and gained consensus within the WebAssembly community group. And so the WASM aspects of the proposal are waiting to land only on the JS proposal progress side of things. In theory, you could use WebAssembly type imports to have something like module handles in future in WASM, and actually have first-class module handle types, but that's all hypothetical for now. + GB: The keying semantics are one of the most important aspects of the proposal, and quite a lot of work had to go in to make sure that this all makes sense. We no longer just have URL keys in the registry; we also have source keys in the registry. So if you import a URL, you get the instance for that URL. But if you import that source, you get the same instance; you're not going to get a new instance. So you're getting the same thing for both of those imports. If you postMessage a source to an agent and then import it in that agent, that actually injects into the registry, in the sense that it creates that source that you passed into the registry. So this is the first case where we've actually had, like, a network bypass, effectively, and you create a new instance in that agent. This is a racy semantic, because if you have already imported that URL inside of the agent, but it got a different source due to some network specifics being different in that other context, and you then postMessage a source that happens to have the same URL key and import that, you're going to get a different value, a different instance. And so the question then was, how do we disambiguate that? When you're in a worker context and you import a URL, how do you know what you're getting? If you import that URL, what do you get? How do we know which key you get?
And the idea there is that there is a canonical source associated with every URL, and that is the source that you're importing. So this is the canonical module source for a given URL. (Sorry, that should say a.js.) That canonical module source is defined in a well-defined way, and then the question is, when do you use the canonical module source versus injecting a new module source? We came up with this semantic of a ModuleSourcesEqual definition: you can equate two module sources and determine their equality, so that if you transfer the same module multiple times, you can see that it's the same module. Another way to look at this is as two separate registry maps: the URL to the source, and the source to the instance, where the URL to the source is the URL to the canonical source, and every source has a unique instance. Are there any questions so far? I can check the queue. + RPR: There is, from NRO. + GB: Thanks. NRO, did you want to speak, or can I just quote that? + NRO: Just quote that. + GB: Sure. So NRO asks if the canonical module source is per realm or per global. It is associated with the module registry, and right now, the module registry is per realm. But I will get to the realm discussion as one of the normative changes that we're looking to discuss here as well. NRO, did that answer your question? + NRO: Yes. + GB: Thank you. So this then fits into the module harmony layering. So far, import defer, source phase imports, and ESM phase imports are all phase proposals. But the semantics of this ESM phase imports proposal form the basis of inline modules and the primitives for module declarations, and so when we're figuring out these semantics of importing modules in different contexts, that's actually starting to fill out the details of what compartment imports would look like as well.
And so it's got a lot of crossover with other module harmony proposals, but it has been layered very carefully, and we've had all the very detailed discussions around that. To show you a little bit what these layerings look like: with module declarations, the object that you would get for the module, its representation, would be the ModuleSource instance as specified here. And when you import it and postMessage it, you would have the semantics as specified here. So whoever is going to work on module declarations will have a much easier time of things. So it's a ModuleSource, with AbstractModuleSource in the prototype chain. + GB: Compartments: each compartment has its own module registry, and dynamic import is scoped to each compartment. So the semantics of this import that we specify, which are scoped to the registry, can extend to compartment registries as well. And finally, for the HTML integration, there is a draft PR up which specifies the HTML integration and the serialization semantics. It is still a draft, and it is not complete, but it covers the majority of the semantics in specifying this new worker behavior. There is also a separate PR by NRO to support an import map option in the options bag passed to new Worker. With this, you could customize the import map of the worker when you construct it. And what we would really like is if we could land these together. What we could say is: when you pass a module to a worker, the import map associated with the registry context of the module you pass should also be passed along with it. In that way, we could have this one expression support transferring resolution; it would be a snapshot of resolution at the time of worker construction, but it would then actually get the usability required to have imports just work with the import map resolutions.
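(Editorial note: a sketch of the worker ergonomics being targeted; everything here is draft HTML integration, not yet implemented in browsers, and the `importMap` option shape is from NRO's open PR and may change.)

```js
// Today: new Worker("./worker.js", { type: "module" }) -- URL plumbing
// that bundlers struggle to trace statically.

// With ESM phase imports plus the draft HTML integration:
import source workerMod from "./worker.js";

// Passing a ModuleSource statically identifies the module worker --
// no URL juggling, no explicit { type: "module" }:
const worker = new Worker(workerMod);

// With the draft import-map option, resolution context could travel along:
// const worker2 = new Worker(workerMod, { importMap: {/* ... */} });
```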
+ GB: The semantics of this new worker behavior, and our hope for the import map, were presented at the WHATNOT meeting in '24, prior to stage 2.7. But both of those PRs still need implementation progress to be able to get them over the line, and NRO's PR as well. + GB: So just to summarize what the proposal includes: ESM phase imports specifies ModuleSource for JS, which complements the existing source phase imports for WebAssembly. It defines the transfer and dynamic import semantics, and fully specifies the host loading invariants for module keying based on these semantics, considering the edge cases of module handle transfer. Then the HTML integration specifies the ergonomic new Worker, and the proposal layers within the harmony process. That's what we got to at the end of '24 with stage 2.7. So finally, for the new parts. Everything I presented so far is over a year and a half old, right? What I'm presenting today is the test262 coverage, two normative changes that I'd like to discuss (one change to the keying semantics and one change to the cross-realm import behavior), and then I'd also like to hear feedback on source representation for built-in modules. + GB: On the test262 side, we now have a draft test262 suite PR. It is still a very rough draft, but it includes the general semantics of how to get coverage on this proposal; just figuring that out was a little bit tricky. The key integration point there is having an intrinsic for constructing a module source. Basically, we need a module constructor to test it, even though the ModuleSource constructor won't be specified. And so it has that, and it takes both a source and a specifier ID.
And then, in order to actually validate what you're getting, another intrinsic: an import.meta intrinsic hook, so that we can stamp these import.metas and check IDs between modules, so that we can verify the invariants. And with that, we can actually test the keying semantics across contexts, based on the two normative changes today, which enable this. But it doesn't include any of the existing third-party source phase imports tests, which were separately handled in the source phase imports process for Wasm. Okay, so with that, I will get into the details of the two normative changes. Before I start, are there any questions? Okay. + RPR: So there's a point of order first. There is a person on the call that has just dialed in. Julian Espina Delan-Anhel? We're not sure. Oh, JWS. + RPR: Ah, okay, apologies. Yeah, you did communicate that. Thank you. All right, please go on. Yeah, I'll take the question from CM. + CM: Could you go back to the slide where you had the two-stage mapping thing? + GB: This one? + CM: Yeah. The URL-to-source and then the source-to-instance registry maps thing: I'm just trying to understand what that means. If two different URLs lead to the same source, they actually end up pointing at the same thing? + GB: They would point to the same instance. + CM: And so the source is identified by what? By the full text of the source text, or a hash of it? + GB: Yes. So that's normative change number one. + CM: Okay. + GB: It's a combination of the source text equality, so that's ModuleSourcesEqual, and the host-defined metadata, which includes the URL, effectively. But your question leads exactly into my normative change number one. + CM: Okay. I was wondering if one of those normative changes might relate to that. + GB: I will dive straight into it.
+ CM: Okay. Carry on then. + GB: Okay. So this is basically the concept of what happens if you post the same source twice into an agent: you're going to get two sources that aren't equal, because they are two different deserialized objects. But when you import them, you're going to get the same instance, even though they were different module sources, because they satisfy equality of both the host-defined metadata and the source text equality. They support equality by the two key equality constructs that we created. And this gets to the heart of CM's question as well: by defining equality based on source text equality, using this ModuleSourcesEqual, you basically allow forming an equivalence relation between instances across agents. We thought that this would be something useful to have, because originally there were discussions in the structs proposal about whether we can form a concept of module identity between agents. But at the end of the day, there was no genuine semantic or actual use case that was formed there at all. And so this was a completely hypothetical use case and design aspect that we added, which served no benefit at all. And in fact, the behavior is difficult to implement, difficult to test, and somewhat counterintuitive. So the normative change is whether we can do a better form of equality here. The idea is, instead of using source text equality (where two sources might look the same, but you add one white space and they're different; it's a terrible concept), use the actual object identity of the source phase object representation. And that's the first normative change. It no longer allows us to create identity of instances between agents and contexts, but there are no use cases for that that we know of. + GB: So, normative change 58.
Instead of using ModuleSourcesEqual, which compares the source text, just use the object identity of the ModuleSource object we already have, and that becomes the direct source key. An implication of that is that structuredClone will now give you a new instance key. So if we go back to this example and we postMessage the same module twice, you got different objects. Sorry, the slide's wrong; what we're getting is exactly non-equality. So you're going to have two different instances now. We're basically making it fully well defined on the source object's instance identity. If you did structuredClone before, that would have given you the same instance; now you'll get a different instance, a different module. And for the most part, it's a much simpler, much more intuitive semantic. And I think it also allows test262 testing to work very nicely on these semantics. So what's happening when you create a new source is that it's just a completely new source key: you import the new source key and you get a new instance. And it's that simple. + GB: The next normative change to discuss, which builds on top of that, is the cross-realm import case. We initially actually banned cross-realm imports, because it wasn't clear how to properly define them. So we just said, if you pass an object into an iframe and you try to import it, you're not going to get anything. What we want to do is actually just relax this constraint now, by using the source object identity: the object has a perfectly well-defined identity no matter which context you're using it in, or which realm you're using it in. And the same could, in theory, apply across compartments; that same identity model works perfectly fine across compartments.
GB: The next normative change to discuss, which builds on top of that, is the cross-realm import case. We initially banned cross-realm imports, because it wasn't clear how to properly define them. So we just said: if you pass an object into an iframe and you try to import it, you're not going to get anything. What we want to do now is relax this constraint. By using the source object identity, the object has a perfectly well defined identity no matter which context or realm you're using it in. And the same could, in theory, apply across compartments; that same identity model works perfectly fine across compartments. So the idea is that instead of the current spec text, which throws if you try to import an object that's not from the same realm, we would just relax that check and use the object identity, which is the natural specification path. There's no custom definition required for cross-realm to work; it just works out. And we would get this cross-realm semantic, where you get instances per realm. I don't know why I have that slide. Yeah, so that's it. What I would like to do is seek consensus for the two normative changes, if possible. Any questions or discussion are very much welcome. The first one is module source object identity instead of source text identity. NRO has reviewed this change, and I would like to now put that to discussion. Okay, we've got a few items on the queue, starting with Kris. Go ahead, Kris.

KKL: For one, this looks great. But you mentioned earlier that we needed to expose a test function for module source construction that accepted an arbitrary source key, I think, or specifier as a second argument.

GB: The second argument was an ID string, used in place of the URL.

KKL: In place of the URL. Okay. And it's clear from the way this is framed in test262 that this is not how we would propose to surface the module source constructor proper.

GB: Yes, it's a test262 intrinsic, not a public function. Yeah.

KKL: All right. That sounds good to me. RGN?

RGN: Thank you for the clear explanation of a very elaborate topic. I'd like to express enthusiastic support for both of the normative changes. They make a lot of sense and I'm strongly in favor.

GB: Sure.

RPR: NRO?

NRO: Yeah. So first, just a note for the rest of the committee: we are removing the utilities that we were hoping to build on for shared structs, but we could always add those back; it's just that they would not affect module imports.
But in the future, we could say, "Oh, actually, in this source there is this special global ID that shared structs can use, and it's preserved across structured clone." The only thing is that it would then only be observable through shared structs, even if it's tied to module sources. I was the one hoping to be able to reuse these things for shared structs, and I believe we still can in the future. So I still want to go in this direction, even with these changes.

GB: That's a good point. Thanks for mentioning that.

NRO: And for my other topic, I am generally supportive of both of these changes. This does simplify the mental model, and probably it also simplifies implementation, so it's good. Just a note: my review of the pull request was not as detailed as I was hoping for; I apologize, I didn't have enough time for that. So I might still find some things, but from a first look over the past few days, they looked very good.

GB: Sure.

CZW: Oh, yeah. I really appreciate the clarification work on the proposal, especially in the two pull requests. But I think the pull requests were opened last week, so it's a little late relative to the review deadline, and I haven't had much time to review the changes. I had a question about the meaning of the phase on the module request key, and I commented on one of the pull requests. The slides made this a little clearer to me, but the slides were only made public a few hours ago. But anyway, I really appreciate the clarification on the proposal. Thank you.

GB: Sure. Would it help? I certainly appreciate that the content of this one was added late. If anyone has concerns about that and would like to push consensus back, I'm open to that. And CZW, if it would work for you to say that it has consensus conditional on your sign-off on the final shape of it, I think that would be accepted anyway. I'm open to whatever works best, given the situation.
CZW: Is this a question?

GB: Yes. Well, I guess the statement is that we can say it has consensus based on your editorial approval along with NRO's final editorial approval. And then the question is: if anyone feels that is not a strong enough claim given the late materials on this one, please say so, and I can always bring this back next meeting.

CZW: Yeah, I think that would be great. Giving reviewers a little more time to review the changes would be really appreciated, and we can come back and seek consensus on the normative changes again.

GB: Sure. Okay, I will add an item for the next meeting to request consensus on these changes with adequate review, instead of seeking consensus now. Let me just go through the last slide.

GB: So the other PR is the cross-realm case, which is just removing the cross-realm error. I will defer the consensus on that one as well. If anyone has any further feedback, please contact me between now and the next meeting, so that we can be sure we've got all the feedback in by then. I'll move on to the next two items, very briefly. The first is a possible extension to the proposal, which is feedback that came up, I believe, out of Moddable's initial exploration of the source phase, but please correct me if I'm wrong. The idea is: if we want a source phase import for built-in modules, that is, host modules, should we use the same module source or a different module source? You could also support this as a host module handle, and support it in different contexts. This one would obviously have an identity that is not tied to the object, because it would be tied to the actual built-in ID, so it would be distinct from the former change. But that is a fairly straightforward semantic for either a synthetic module source or a built-in module source.
GB: So the idea is to either extend module source to apply to arbitrary synthetic modules, so that we use the same module source representation, or to have a new synthetic module source representation. This would be a normative change, and there is no spec text yet, but it is also made easier to consider by the current round of changes. What I'm looking for is whether anyone has feedback on this use case, or interest in it. It would be great to discuss it further. I think that's that one. Yeah, NRO.

NRO: Is there anything in the current spec, with or without these pull requests, that would prevent hosts from already providing their own built-in sources?

GB: Yes, that's correct; sorry, I should have brought that up first. With built-in module sources: because we allow host module sources, hosts can define any module source themselves, and can very easily define custom module sources, just like a WebAssembly module can be a module source. So the idea of a built-in module source or a synthetic module source would be a streamlined version in the actual ECMA-262 spec, which would make it more natural, basically bringing it into ECMA-262 as something available more generally for synthetic modules. But yeah, the functionality already exists today normatively. It's almost a refactoring, though it certainly is a normative change as well. Yeah, NRO?

NRO: Yeah. I think I'm okay with the idea of having synthetic module sources. But for the host-defined modules, I'm not sure we should have that in the language, mostly because not all places that run JavaScript have built-in modules; even the web doesn't have built-in modules. So maybe it would be better, for example, for Node to define its own Node module source for its own built-ins. I'm not sure yet what I prefer here.

GB: Yeah. Yeah.
This is just seeking early feedback at this stage. So yeah, I don't have a strong opinion myself.

RPR: MM?

MM: Yeah. Just saying that, since you're asking if there's interest: I am interested. That doesn't mean I have an opinion yet; I just wanted to register that I'm very interested in this extension. In particular, with compartments, we're keeping our eye on trying to enable code running on one host to use compartments to emulate other hosts.

GB: Okay. It would be great to discuss that further. Thank you, MM.

RPR: CZW?

CZW: I think I should mention that this is part of the host integration, and the examples show that this is not just web integration. So we should specify this before stage 3 as part of the host integration. Am I understanding this correctly?

GB: I'm sorry, I'm not following. Can you repeat that?

CZW: So is this a part of the host integration doc?

GB: This would be a feature on synthetic module records. As I say, there's no clear spec text here currently, and maybe it will be onerous to specify, but the idea would be to allow synthetic modules to define a module source, or to get a module source for free, basically.

GB: And we could make this optional as part of the create-synthetic-module-source call: if hosts want to use it, they could opt in; if not, they could opt out. Did that answer your question, CZW?

CZW: Yeah. Yeah.

GB: Okay. Thank you.

CZW: And another point I would like to make is that I would like to see the host integration definition for Node.js in particular, because in the slides we mentioned the web integration and the compartments integration with the Wasm module source, and how they should operate with each other. But the Node.js VM module is already out there.
And Node.js is one of the major runtimes that already implements virtualization of modules. So I think it would be great if we could clearly define this space before we go to stage 3.

GB: One caveat I would add is that we should not make stage 3 dependent on Node.js implementation work, because the VM API has a very complex surface area, and figuring out every edge case as a requirement for this spec would be an unnecessary constraint on it. But I think there are very simple ways that a module source can integrate into VM hooks; I think it's a very straightforward thing to return a source in a hook, and to support that in existing hooks, especially with our identity model well defined. So certainly let's have some more discussions to talk through that, but I'm hesitant to say it should be a stage 3 requirement.

CZW: Yeah, I'm okay with that if we can explore what the integration would look like. Because right now it's still unclear to me what the general picture of the Node.js integration would look like, especially for the Node VM module.

GB: Sure. Let's have some more discussions around that. Thanks, CZW. Okay. The last topic I wanted to discuss: both of the normative PRs were actually based on an editorial PR that finally does a module source refactoring. This is something we discussed previously: exploring whether we should properly separate the module source record from the module record. If you recall the first slide, where you get the source text module record directly for the dynamic import: it isn't necessary to do it that way, because you can just get the instance record and then you're done. So it wasn't necessary for the specification. But seeing exactly the sort of questions that CZW has, and for implementation questions, I think it would help at this point to think in terms of what it means to actually, architecturally, define a module source.
And so a very simple formulation, as a non-normative editorial PR: instead of having a module source object with a slot pointing to the source text module record, we have a source record, which holds the module source and the host-defined information. It's that simple: one slot on each. And there's the host-defined slot on the source record, which in HTML (for those who don't know) is the script itself.

GB: Because if we're defining identity based on the module source object, we have the entire identity there already, right? So it's fairly straightforward: that's your key. When you get a host-loaded imported module, you take a module source record, and then you return the module source instance. Then we can define, in the host invariants, all the keying semantics based on this key, because we now have the key. That's basically it. We can integrate the well-defined keying semantics in the same way, we can define all the invariants, and we have a simpler model with a more formal definition. That's PR #56. So yeah, that's everything.

RPR: NRO says "I like the editorial PR." End of message. CZW?

CZW: Yeah. I also like the new pull request, especially as it clears up my original question about how the host would interpret the module source record state. Because this record originally links to a source text module record, and that record contains other state, especially things like `import.meta` objects. So I think the refactoring makes it clear how the host should clone and transfer the module source. Thank you.

GB: Yeah. I think it was always implicit, but this makes it explicit in a way that makes implementation clearer. And thanks for confirming; it does look like that. The summary here is that we've got the two PRs, 58 and 61, which I'll bring back at the next meeting.
And I will aim to do a short item to seek consensus on them, having gone through the full editorial and review process. Until then, we will pick up the discussions on built-in module sources, and I've also referenced the existing HTML integration PRs, which are all pending JS implementation progress.

GB: Okay. Thank you.

### Speaker's Summary of Key Points

* Provided a reminder of source phase imports & Wasm imports semantics
* Covered the ESM phase imports current specification semantics in detail
* Discussed and justified two normative changes, PRs 58 and 61, for the proposal

### Conclusion

* While there were no objections to the normative changes in PRs 58 and 61, consensus on these changes is deferred until the next meeting due to the late provision of details and to ensure adequate review.
* In the meantime, discussions will continue on built-in sources and the HTML & Node.js integration designs.

## First-class Protocols Update

Presenter: Lea Verou (LVU)

* [proposal](https://github.com/tc39/proposal-first-class-protocols)
* [slides](https://tc39.es/proposal-first-class-protocols/slides/2026-03/)

> Slide: History

LVU: All right. So this is not a new proposal. It was started in 2017 by MF, who's sitting right next to me, and it reached stage 1 shortly after. There was an update in 2018, and then after that there were crickets for five years or so, until November 2025, when there was renewed committee interest and two new co-champions, me and JHD, and we iterated on it quite a lot. There are a lot of changes, and there's a big update today. We're not asking for stage advancement at this point; today's agenda is just getting back up to speed with the proposal. What is the current status? What are the changes since 2018? What are some of the unresolved design challenges that we faced? We're hoping to get committee feedback on those, and hopefully unblock them.
And the plan is to ask for stage 2 in a future meeting, hopefully this year.

> Slide: Motivation

LVU: So what is the original motivation behind this proposal? There are a lot of existing protocols that are kind of ad hoc right now. The Symbol-based ones all live in the same `Symbol` namespace. So a primary motivation is to reify the existing built-in protocols in the language, and also to give authors this expressivity: to allow them to describe and refer to their own protocols, their own interfaces. Once we have a standardized representation, protocols can be consistently introspected, and they can be composed. And while most protocols in the language today are mostly about requirements, this proposal goes one step further and supports both required and provided members.

> Slide: Design principles

LVU: Some of the design principles behind it: in general, we want to encourage Symbol-based interfaces; there are obvious advantages to this. There should not be any coordination needed, either between producers and consumers or across multiple producers. Symbols give you all the fun and convenience of monkey-patching built-ins, but none of the downsides: they make it possible to monkey-patch built-ins again without all the reasons that made this a bad idea. They also enable a more principled duck typing.

> Slide: Real world use cases

LVU: So what are some of the real-world use cases? I've already alluded to some. There's already a cornucopia of Symbol-based and string-based protocols in the language. Iteration is the canonical example, and more generally generators. There are a lot of protocols to express capabilities and to convert between different types of values: converting to strings, converting to primitives, converting to JSON (how to convert this object to JSON). There's `Symbol.toStringTag`,
which determines what `Object.prototype.toString` will return. There are thenables, `isConcatSpreadable`, and unscopables, and a bunch of less nice ones like the regular expression symbols and `Symbol.species`. There's an array-like concept, but it has never been formalized as a protocol.

> Slide: Real world use cases 2

LVU: The web platform and the DOM also have setlike and maplike, which a bunch of objects use. Having a first-class concept for this also enables a lot more protocols to be created: protocols around mathematical properties of certain structures where that makes sense, or structured clone (there could be a structured-cloneable protocol). Many other languages support ordered protocols and equals protocols, and protocols to go from an iterator to a builder. We could create Symbol-based alternatives to the existing string-based protocols. And once we have standard protocol objects for these, we could implement them on existing built-ins and provide them for authors to use in their own objects. We could even add new protocols for operator overloading; there have been discussions around that. There are a lot of use cases, and obviously the deeper the hierarchy goes, the more use cases emerge.

> Slide: Real world use cases—DOM & Web Components

LVU: `EventTarget` right now is a base class; it should really have been a protocol. As it is, if you have an existing hierarchy, you cannot have an event target; having a protocol for that would solve this. There are other base classes that are more questionable, like `HTMLMediaElement`, which would be more flexible as a set of protocols. For example, there are a bunch of `HTMLMediaElement` features that we now want to add to the image element, but we don't want to wholesale pull in the entirety of `HTMLMediaElement`, so we're just going to add them ad hoc: "let's add `currentTime` to it", or "let's add the `controls` attribute".
If there were protocols for this, it could be done in a more structured way. There are a bunch of element capabilities that are just implicit and not described by anything: focusable, labelable, popover target, form-associated. Web components need to pull in these capabilities on a case-by-case basis rather than all together, and right now this is done by forwarding through the `ElementInternals` object. This would have been much easier to manage as a set of protocols, and would have provided a lot more flexibility. In web components, there could also be protocols for improved ergonomics, like managing CSS states, styles, templates, and so on.

> Slide: Design circa 2018

LVU: To refresh people's memory, because it's been a while: what did the design of protocols look like in 2018? Declaring a protocol looked like this. Bare property names declared a required member. These members look like string members, but they would actually create symbols behind the scenes. You would implement them either by referencing the Symbol directly, or via the declarative syntax; there was an imperative method as well. And you could even have a grouped syntax as a class element. So in summary, the 2018 syntax covered protocol declarations, and there was a corresponding expression syntax. Members were declared as identifiers but actually implemented as symbols, and there was syntactic sugar for defining and using these symbols, or you could just use them directly. Whether a member was required or provided was determined by whether it had a value: accessors and methods were provided; anything else was required. There was an `implements` operator to check whether an object implemented a certain protocol, and there was an imperative API for both of these. And it was very class-centric.
It was integrated with class fields, and there was a syntax for inline grouped implementations in class bodies.

> Slide: Explicit required members

LVU: So what has changed since then? Many things. The first is that required members are now explicit, which gives us more flexibility in what we can do for both required and provided members. Back then, it was very confusing for people who were new to protocols: it was unclear which members were required and which were provided. Basically, if a member had just a name, it was required; otherwise, if it was a method or an accessor, it was provided. There was also no obvious extension for adding additional constraints, which is the plan for the future: right now, required constraints are only about presence, but down the line we want to be able to add further constraints. And on the provided side, the old design was not easily extensible to providing actual data properties. Now there's an actual keyword, which we've currently called `requires`. `abstract` would be a valid choice as well; it is more consistent with other languages, but potentially less self-explanatory to someone unfamiliar with those languages. There's a case for either. Anything that doesn't have `requires` is provided, which also allowed us to support regular data properties as provided members. One reason for this design is that required members are usually far fewer than provided members, so it is not a big tax, and it helps a lot with clarity.

RPR: LVU, would you like to take questions in line or at the end?

LVU: We were thinking at the end, because there are a lot of design issues that relate to and affect each other. So we were thinking of doing that at the end.
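The explicit `requires` keyword described above might look like the following (a sketch in hypothetical proposal syntax; the protocol name and members are illustrative, not taken from the slides):

```js
// Hypothetical proposal syntax, not implementable today.
protocol Container {
  // Required member: implementers must supply this themselves.
  // The bare identifier still creates a symbol behind the scenes.
  requires get;

  // Provided member: supplied by the protocol itself.
  has(key) {
    return this[Container.get](key) !== undefined;
  }

  // Provided data property (now possible, since anything
  // without `requires` is provided).
  version = 1;
}
```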
> Slide: ComputedPropertyName for literal member names

LVU: Previously, there were only two kinds of members: identifiers, which generated symbols behind the scenes, and quoted members, which added a literal string property. That meant there was no way to use external symbols. How do you express a protocol that can only be implemented on iterables? You could not require `Symbol.iterator` as a property. Now we use computed property names for declaring a literal name. An identifier still creates a symbol behind the scenes; nothing has changed about that. A computed property name means "use this exact name literally": if it's a string, that exact string is used and no Symbol is generated. But it can also point to external symbols that are already available, possibly from another protocol or from the existing well-known symbols.

> Slide: Framing around objects, not constructors

LVU: Another change is that we've gotten rid of all the framing that was specific to constructors. Now the entirety of protocols is framed around objects in general. It still accommodates constructor use cases, just as it did, but all concepts are now defined in terms of objects. With the previous syntax, for example, there were dedicated static members, either as requirements or as provided members. Implementing a protocol on an object meant that all the direct protocol members were actually added to the prototype of the object the protocol was being implemented on. So it didn't work very well on instances, and it didn't work very well on ad hoc object literals; only static members were added directly to the object itself. That also had the unfortunate effect that the `implements` check returned false on actual instances of the class. Now there's nothing special-cased for constructors; it's all framed around objects.
So implementing a protocol on an object adds its direct members directly to that object, and there are subprotocols for implementing members on an object that lives on a property of that object. We dropped `static` in favor of subprotocols, because you can just include static members in a subprotocol on the constructor. The only difference is that we still have the dedicated syntax for class heads, but it's now sugar for implementing the protocol on the class prototype, which means the `implements` check on instances will now return true. There's a bug on this slide.

MF: Yeah.

LVU: Anyway, hopefully it's self-explanatory regardless. That last line there should have been `UC implements P`, and it would have been true. We also dropped some of the more special-case syntax. We no longer have implemented-by blocks in classes. Most of the protocols we looked at had one or maybe two members (the vast majority had just one), so the ergonomics advantage this provided was marginal; we figured we can add it later, but it doesn't seem MVP. We also dropped inline implementations for existing classes from protocols: previously you could provide implementations for even built-in classes in your protocol. We decided to defer that as well.

> Slide: Protocol flattening

LVU: One bit that was kind of undefined before and is now more well-defined: how do you implement multiple protocols on a class? Being able to implement multiple protocols has always been the goal, but how does it actually work? Are the protocols implemented independently, one after the other? If it worked like that, then when one protocol partially satisfies another, you don't get that benefit: the class would still need to satisfy the requirements of each protocol individually.
So instead, the way we decided to do this: when you're implementing multiple protocols through a class head, they are first flattened into a single protocol that includes the union of all provided and all required members, and that flattened union protocol is what is actually implemented. And since we're going to define that algorithm anyway, we figured we can expose it to authors as well, via a method, which we've called `protocol.union` for now.

> Slide: Protocol introspection

LVU: There was also always the goal of being able to access protocol members independently, whether to create new protocols or to use them in any other way, but this was not defined at all; there was no way to do it. And it is non-trivial, because what namespace do you use? On the actual protocol object, any string property could itself be a Symbol member. So instead, we've now introduced a `protocol.describe` function that returns an object literal that fully describes the protocol, and that object can be fed directly to the new protocol constructor to create a new protocol. Obviously it can be tweaked in any number of ways to create a derivative protocol. The object shape is TBD; we have some ideas about it, but what we decide on the open questions will also inform the shape of that object literal.

> Slide: Immutability

LVU: We also decided (this too was undefined previously) that protocols are immutable. If you want to tweak an existing protocol, you create a new one; that's why they are so introspectable and why you can get all this information about them. You can create your own new protocols, but you cannot tweak existing ones. So if an object satisfies a protocol, that is guaranteed to continue to be true, at least as long as the object itself doesn't change.
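A sketch of how the flattening and introspection pieces above might fit together, in hypothetical proposal syntax (the member names, the constructor usage, and the shape of the described object are all illustrative; the actual shape is stated to be TBD):

```js
// Hypothetical proposal syntax and API, not implementable today.
protocol Readable { requires read; }
protocol Writable { requires write; }

// Flattening: the union of all required and provided members.
// `class C implements Readable, Writable {}` would implement
// this same flattened protocol.
const ReadWrite = Protocol.union(Readable, Writable);

// Introspection: a plain-object description that can be tweaked
// and fed back to the constructor to build a derivative protocol,
// since protocols themselves are immutable.
const desc = Protocol.describe(ReadWrite);
const Stream = new Protocol({ ...desc /* plus extra members */ });
```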
> Slide: Reducing boilerplate

LVU: A big tension in protocols is that their design avoids conflicts by using symbols, but when you actually implement them on an object, there are two situations. Sometimes you implement them just for their functionality and use them internally, and that's totally fine: you just use the symbols. But often you want to expose this functionality as an API to your own consumers, and then you need to write a bunch of boilerplate. For example, a common case with custom elements is creating form-associated custom elements. There are a bunch of methods that are expected to exist on any form-associated HTML element, and if we had a protocol for this, you could implement it easily: all you need to do is wire up the actual value of the form control. But then you would need to add accessors mapping each of these provided members to the actual string-named members you want to expose, even though they might be exactly the same. One potential solution is for the protocol itself to define strings, but that kind of defeats the point of protocols, and it moves control from the implementing object to the protocol, that is, from consumer to producer. And often you want to use the same protocol in both ways: in some use cases it's an implementation detail, and in others it's exposed to consumers. So what if it were a property of the actual protocol-to-object relationship, a property of the implementation itself?

> Slide: Auto-generated string aliases via factory

LVU: I originally thought maybe we could add some kind of syntax when implementing a protocol: perhaps an option, or a keyword, that says "add string aliases for all provided members". And it is an open question how to even name this concept.
We called it `withStrings` for now, but it's such an awful name that we're going to be forced to change it. But then we realized a more flexible way might be to just have a function that takes a protocol and returns a protocol that has the string mappings. It is still decided at the point of implementation: you can either implement `P` or implement `withStrings(P)`. It adds a little more noise, but not very much. It can be memoized, so you don't get a whole new protocol each time; the same input protocol should give you the same output protocol, since they're immutable. And then you can even reason about them separately: you can check `implements` separately, and it adds no complexity to the underlying protocol implementation, because it just sits on top of it.

> Slide: Open Questions

LVU: So that's the current state; those are the things we're more confident about, though obviously everything is up in the air. But there are also a lot of things we are less confident about, where we don't quite know which way to go, and we were hoping to get feedback on these. I'm going to go through all of the open questions at once, because they are very interrelated and affect each other quite a lot. This is a summary of them.

> Slide: Are `“foo”` and `foo` distinct members ([#59](https://github.com/tc39/proposal-first-class-protocols/issues/59))

LVU: The first one, which keeps coming up in a bunch of design dilemmas: right now we can have literal strings, and we can have identifiers that create symbols. Should you be able to have a distinct string member and a distinct Symbol member with the same name? This complicates the API shape for anything that references these members, for example `protocol.describe`. It's very convenient for it to return an object literal with a string key for every member and an object with metadata about that member: what is its value, is it required, and so on.
You can't do that if these are distinct members, 'cause they would map to the same string. Whereas if we say no, you cannot have both of them, then we can actually have this nice API shape. + +> Slide: Should `constructor` and `prototype` be automatically strings? ([#84](https://github.com/tc39/proposal-first-class-protocols/issues/84)) + +LVU: Also, remember we dropped static members in favor of subprotocols, including deeply nested protocols. Right now it is up to the protocol author, when they're writing a protocol for a class, whether to define that protocol as primarily static methods and then prototype methods, or primarily instance methods and then static methods. The standard recommended way is primarily instance methods, with the static members as a nested protocol on constructor. But here's the thing: by default, the way protocols work, if you write an identifier named constructor or prototype, these would create symbols. These would create `P.constructor` and `P.prototype` symbols. But a protocol is an instance of the protocol class, which means that technically (it doesn't have to be that way, but technically) `P.constructor` should be pointing to Protocol. And also there are uses for `P.prototype` that we're gonna see in a bit. So we probably don't want that. And it's also kind of a bug factory: you almost always need the actual strings in that case. So a very easy way to deal with that is to just say constructor and prototype are automatically strings; you cannot have these as regular Symbol members. The alternative is to just say you can't use these as identifiers at all. But if you're gonna have an exception anyway, maybe we should do what people are actually expecting. I don't know. There's the argument that this might complicate the mental model too much.
You very frequently need to build on top of existing protocols and add more things to them. The withStrings factory is an example of that: it takes a protocol as input and creates a protocol that is a superset of that protocol. How do we implement protocol inheritance? Can people use extends, implements, or both? implements would pull in other protocols, probably multiple of them, in the same way that classes can use implements to implement multiple protocols. Except it would work slightly differently than it does on classes, because a protocol itself cannot satisfy another protocol except partially; any unsatisfied requirements would be propagated to the point where that protocol is actually implemented. Extends on the other hand kind of implies a prototype chain—would it work this way? If we use extends only, then it would be limiting if it supports only one protocol to extend from, but it would be inconsistent with classes if it supports multiple. Or we could have only implements and avoid that altogether. + +> Slide: How are parent symbols accessed ([#81](https://github.com/tc39/proposal-first-class-protocols/issues/81)) + +LVU: Regardless of how we're doing inheritance, there's also the question of how to access parent symbols. Suppose you have a protocol that extends another protocol. Do people reference these symbols only through the original parent protocol, or are they copied onto the child protocol as well? In some cases it is reasonable to reference the parent protocol, because maybe it's a well-known protocol. In other cases the parent protocol, or all of the multiple protocols that this might be pulling in, might be seen as implementation details, and maybe I don't want to expose them to my consumers. And a similar dilemma comes up with subprotocols as well. How do you access this bar member here so that you can actually implement it?
You could use `Protocol.describe`, and depending on how we define that, it could even be nested `Protocol.describe` calls. But that is extremely unwieldy. It would be nice if there were better syntax. It cannot be `P.bar`, because then what happens if P has a bar member of its own? Maybe it could use the foo Symbol, because every subprotocol is also an implicit requirement as well, so it will create a `P.foo` Symbol. Or, if we decide that you cannot have a string member and a regular member with the same name, it could even be just a simple `P.foo.bar`. All right. So that's it. And let's go to the queue. + +MM: Okay. Good. So I've got a few questions. First one: are protocols themselves mutable? + +LVU: No. We covered this already. Protocols are immutable. + +MM: Okay. I'm sorry. I missed it. Okay. Where is the semantic state stored to enable you to ask the question at runtime: does this class or object implement this protocol? + +LVU: So right now there is a strong consensus amongst co-champions that this should be done with duck typing. An alternative design is that you could store which protocols this object implements, but I believe that direction generally was not favored. + +MM: Okay. Good. Those were the two alternatives I had in mind. If you do it with duck typing, then if you say declaratively that you implement protocol P, which provides a method bar, and then later you remove the method bar from the class, since classes are mutable, the duck typing would imply that the class no longer implements the protocol, which seems like a fine conclusion. Okay. + +LVU: Correct. + +MM: Okay, good. Are these symbols that are created unique symbols or registered symbols? + +LVU: Unique. Unique symbols. + +MM: Okay. Those are my questions. Thank you for the clarifications. + +RPR: ACE. + +ACE: You don't have to go deep, deep, deep into this 'cause it's early and you're asking a stage thing.
But I wonder what your thoughts are on the clash with TypeScript here. TypeScript already has implements X, which is erasable. For me, in my ts-blank-space library, if I see implements blah, I remove that. You could say: look up what that resolves to; if it resolves to a TypeScript type you remove it, if it resolves to a protocol you keep it. But that makes the TypeScript transformation way more complicated, if you had to look up symbols. Right now I don't have to do any scope analysis, any symbol tables, anything cross-file. It's all purely local. Would you consider a different keyword? Just interested in your thoughts. + +JHD: Yeah. So a different keyword would certainly be worth considering, although this seems to be the correct one, and it already can't be named interface because of TypeScript. So it would be sad to choose a second subpar name for that reason. But also, are you just talking about the conflict in class heads? + +ACE: Yeah. `class Blah implements Foo`. + +JHD: Okay. Yeah. Then I would say the only way to resolve that, if we shipped the implements keyword, would be to know if the right-hand side of that keyword was in type space or value space, basically. That would certainly complicate ts-blank-space. But yeah. If you have ideas, file an issue. + +LVU: Yeah. I think a good solution to that would be to find a nicer keyword that also makes sense; then why not? + +ACE: Yeah. To add to that: primarily I think it's annoying technically, but also for people learning this. We've been ingraining for the last 10-plus years that implements has no runtime semantics, and then not only would we teach the tools, but we'd have to teach humans and other things that write code these days what these mean. I'd be interested in exploring other keywords. I appreciate it's unfortunate, but many things in JavaScript follow that pattern.
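MM's earlier questions about duck typing and unique symbols can be made concrete with a rough userland sketch. Everything below is hypothetical and invented for illustration: the names `makeProtocol`, `implement`, and `implementsProtocol` are not part of the proposal (which uses dedicated syntax), and the shape only approximates the semantics discussed here, namely unique unregistered symbols per member, required members validated at implementation time, and a duck-typed implements test that a later mutation can change.

```javascript
// Hypothetical userland sketch; makeProtocol/implement/implementsProtocol
// are invented names, not the proposal's API.

function makeProtocol(name, { requires = [], provides = {} } = {}) {
  const symbols = {};
  for (const key of requires) {
    symbols[key] = Symbol(`${name}.${key}`); // unique, not registered
  }
  const provided = new Map();
  for (const [key, fn] of Object.entries(provides)) {
    const sym = Symbol(`${name}.${key}`);
    symbols[key] = sym;
    provided.set(sym, fn);
  }
  // Protocols are immutable.
  return Object.freeze({
    name,
    symbols: Object.freeze(symbols),
    required: requires.map((k) => symbols[k]),
    provided,
  });
}

// Validates required members, then copies provided members onto the
// class prototype under the protocol's unique symbols.
function implement(klass, protocol) {
  for (const sym of protocol.required) {
    if (!(sym in klass.prototype)) {
      throw new TypeError(`missing required member ${String(sym)}`);
    }
  }
  for (const [sym, fn] of protocol.provided) {
    Object.defineProperty(klass.prototype, sym, {
      value: fn,
      writable: true,
      configurable: true,
    });
  }
  return klass;
}

// Duck-typed test: a property check, not a brand check. Removing a
// member later makes the test start failing, as MM notes.
function implementsProtocol(obj, protocol) {
  return (
    protocol.required.every((sym) => sym in obj) &&
    [...protocol.provided.keys()].every((sym) => sym in obj)
  );
}

const Valuable = makeProtocol("Valuable", {
  requires: ["value"],
  provides: {
    double() {
      return this[Valuable.symbols.value]() * 2;
    },
  },
});

class Money {
  constructor(amount) {
    this.amount = amount;
  }
}
Money.prototype[Valuable.symbols.value] = function () {
  return this.amount;
};
implement(Money, Valuable);

const m = new Money(21);
console.log(implementsProtocol(m, Valuable)); // true
console.log(m[Valuable.symbols.double]()); // 42
```

Note that `delete Money.prototype[Valuable.symbols.value]` would make `implementsProtocol` return `false` afterwards, which is exactly the duck-typing behaviour MM described as a fine conclusion.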
+ +DRR: So with respect to being able to determine whether or not something on the right side of an implements is actually value-ful: that's not a thing that most compilers, apart from the TypeScript compiler itself, can determine. Most compilers use only a single file's context at a time, so they don't do any sort of resolution to determine whether something is only defined in type space or in value space, meaning JavaScript's binding space. So while it technically would be feasible for the TypeScript compiler, a lot of people don't use the TypeScript compiler for compiling TypeScript. They use Oxide, Babel, things like that. So I think that is a bit of a challenge. + +DRR: I also have the next item. The other thing that I have is similar to feedback that I had last time this was presented. Basically, the fact that identifier-named members don't appear as identifiers for the people actually implementing the protocol is a fairly confusing interaction. It is kind of a surprising thing: you define something in a class with an identifier, or in an object literal, or anything like that, and that defines a member of that name, whereas now you end up with something that you have to define through an accessed field in some way, or you basically destructure it into some first-class value locally. I think that's definitely a footgun, and it's surprising semantics that you would have to define it as a string literal. It's not really a thing that we have anywhere else in the language. I don't know if I want to keep time open to discuss that specific topic. + +MF: I can respond. I understand your initial surprise at it, because it's not how other syntactic constructs in the language work; they are strings. We feel it's very important, like a core part of this proposal, that we need to encourage Symbol-based protocols.
We need that to be the easy and convenient thing to do. And that makes it worthwhile. We could explore what it would look like to, you know, call out the Symbol members in some other way. But, you know, from what we've looked at so far, that's the design that this champion group prefers. + +DRR: Understood, yeah. I do appreciate that specific point. We also got a reply from SHS. + +SHS: Yeah. I'll just call out that some tools actually make a distinction between identifiers and quoted strings, for optimization purposes. So for instance, Closure Compiler cares about the two and treats them differently. So this would kind of break that sort of treatment, for what it's worth. + +MF: Right. Closure Compiler doesn't have support for protocols. + +SHS: No. It doesn't. What I'm saying is it would have a hard time dealing with this in the future, if it were to be part of the language. + +MF: I don't understand how. Wouldn't it just be a completely new thing? You're just saying you wouldn’t be able to reuse the same logic that you do for classes or something? + +SHS: No. No. What I'm saying is that Closure's JavaScript is basically a different language, one that treats quoted property names as different from identifier property names in optimization. + +MF: In Object literals and classes? + +SHS: Everywhere. Yes. To the extent that if you quote a property it will never be renamed, whereas if you use a dot identifier then it may be renamed. And that's actually a very important distinction. I'm not sure if Closure's the only thing that does that, but it is a tooling consideration to think about. + +MF: Okay. Okay. Thank you. + +RPR: Just for awareness, there's five minutes left on the time box, and we are gonna have to stick to that. OFR is up with a re-reply. + +OFR: Yeah. Maybe I'm not fully understanding the proposal yet, but also on this point of why it has a different syntax.
I even struggle to understand requires, because isn't that just what, in most other languages, you would express by just saying function foo, which would imply that an implementation has to implement this function to implement this protocol? So I don't understand why that needs a different syntax than a class literal. + +LVU: So `requires` lists the members that the implementing object needs to have in order to implement the protocol. If they try to implement a protocol without having the required members, that will throw. It's basically kind of similar to abstract members. If you've used any language that has interfaces, you could see the provided members as the default implementations and the required members as the abstract members, if that helps. + +OFR: Right. But I don't understand why declaring an interface needs to look different than declaring a concrete superclass. + +LVU: You mean why the syntax for required members needs to be different? + +OFR: Yeah. I mean, why is it not just like a normal member but without a definition? + +LVU: You're right. So basically, why did we make this change? First off, there was confusion when people looked at code written with protocols: it was not immediately obvious what was required and what was provided. Also, down the line we want to add additional constraints, and there are many potential constraints you could envision. Maybe a certain member needs to pass a certain test: maybe its descriptor needs to pass a certain test, or its value needs to pass a certain test, or it needs to descend from a certain type. There are many possibilities there. We don't know how this is going to evolve yet, but we do know that we want it to evolve at some point.
Being able to syntactically tell the difference between the two types of members gives us a lot more flexibility. For example, right now with the current syntax you can provide data properties: that just sets a property to a certain value, and that is a provided member. It is kind of weird if, when you omit the value, the function of this member suddenly changes completely, 'cause it still looks like a data property. In classes, for example, you don't have two syntaxes where such a small difference changes what a member means that dramatically. + +OFR: I understand it was a design choice. But I think I'm still unconvinced that this kind of abstract class should look so drastically different from a concrete class in syntax. + +RPR: So we have two minutes left. And there's quite a long queue. And a continuation tomorrow. + +JHD: I just wanted to add one thing. These are not only usable for classes; they're also usable for objects. So it would be weird if they had too close a syntax to classes, in my opinion, because a codebase with zero classes and many protocols would not expect that similarity. + +DRR: What I would say is: consider a design where this proposal is deconstructed into maybe three general directions that you can target. One is where you define new symbols that are off of a common object. You should look to enums for that. Another is being able to append existing members, like these default members that you have. That would be something akin to mixins, or something along those lines, if you want to use that terminology. And the other is being able to ensure that something actually satisfies an interface. You can lean on TypeScript for that, for people who actually care. I kind of doubt that engines actually want to do the validation at runtime that this thing actually implements a type.
I also think the people who want to use this feature are probably also using a type system, to be honest. So that's my input on this proposal. Otherwise I think it's a lot of things wrapped into one direction, which I understand. But I think those three decompositions would make things a lot easier. + +RPR: Thank you, DRR. So we're at the end of the time box now. But this will continue tomorrow, so don't worry: we've captured the queue for everyone that has registered questions so far. CDA, you have a screenshot? All right. Thank you, LVU. In the remaining few minutes I think we have a special announcement. + +### Speaker's Summary of Key Points + +* Gave a quick reminder of the old protocols proposal status +* Introduced the new champions' group +* Showed off the new design changes that have recently been decided upon by the champions' group +* Went through the unresolved design challenges +* Received committee feedback: + * MM cares that protocols are immutable (in that an implements test won't change if the object doesn’t change), that they use property tests instead of branding, and that the generated symbols are not registered symbols + * ACE and DRR are concerned about `implements` in the class head and how that would affect type-stripping tools that can't tell apart term space from type space + * ACE is also worried that people have learned over many years that implements in the class head has no runtime behaviour, and changing that could be confusing + * SHS and OFR are concerned about using identifier names for generated symbol members in protocol declarations instead of annotating them in some way + * DRR would like to consider splitting what's contained in the proposal into 3 parts: + * a convenient way for mass-creating a group of associated symbols (enums?) + * a mechanism for copying groups of properties from one object to another (like Object.assign?)
+ * an operation to check that some description of an interface is satisfied by some object + +### Conclusion + +* (continued) + +## TG3 Security Update + +Presenter: Samina Husain (SHN) + +CDA: TG3 security update. TG3 continues to meet weekly to discuss the security impacts of proposals. Please join us weekly on Wednesdays if you would like. Some may be wondering why this item appears here in the schedule and not where it normally is. And that's because… SHN, over to you. + +SHN: Okay. Thank you very much. I wanted to take a few minutes before the day's over. We've had a few milestones. There's one very big milestone that we also have to recognize, and I would like to share my screen. + +SHN: So I'd like to share my screen if I can. Give me a second. So, a recognition award. The reason I'm up here is that I wanted to recognize RPR. As you all know, at the July GA of last year, RPR was awarded a recognition from ECMA, and I was unable to go to the Tokyo meeting. And I wanted to share this award personally with the committee and RPR. So, RPR, there's a beautiful citation, which I will share when we're having cake, about your exceptional leadership, your consistent support, your ability to keep this committee running smoothly over the many years, and many more things. The work that you have done, together with your co-chairs but especially you, is truly appreciated by the whole committee and by ECMA. So if I could ask you to join me here please. Oh, and it's RPR’s birthday. + +SHN: Rob, I have a birthday present for you and it is so heavy. I have to put it down. + +SHN: Okay. So I learned a few things about RPR. I will then pass the mic around and you all can make comments. I've only known him a few years, but I learned that he loves his colorful shirts. Probably that brings that peace into the room when we do our work. He has this baseball cap. I'm gonna try to convince him to use this one in the future. This is for you.
+ +SHN: And I learned that RPR likes whiskey. I did not bring any liquid with me, but I may have to think about that. I didn't know that you liked whiskey so much. But what I have is something else. It's quite special. + +SHN: I'm not quite as tall as RPR. Okay. Somebody's gotta take the mic and hold it, 'cause I did break a wrist, and it's the right wrist, and this is very heavy. I need a volunteer to please hold the mic. I love this. Thank you. You know, it's like when you have a fan fanning you, like you're the queen. Okay. RPR. On behalf of ECMA and the whole committee, please accept this award for your great efforts and your work. There you go. Congratulations. + +> Much applause + +RPR: Thank you. Thank you. It's beautiful. + +SHN: I hope you'll enjoy it. I hope you'll enjoy reflecting upon it, and remember all the good work you do and your committee. + +RPR: Yeah. This is wonderful. Thank you so much. + +SHN: I'll have to consider the liquid next time. All right. I am going to pass the mic around for others to make comments from your memories of working with RPR. I also want to share that we have had a few milestones today: not only the recognition for RPR, but also Temporal. So outside, when we adjourn, there are some Swiss chocolates. It's about a kilo, so it's 100 grams for every year of Temporal, and then the remainder of the grams left over is for RPR’s effort. So that's the 1,000 grams, which is a kilo. And of course a chocolate cake from New York, which is apparently quite delicious. So enjoy chocolates from Switzerland and, across the waters, also from New York. So first a speech, and then others. + +RPR: I was not prepared to make a speech. All I can say is that it's a pleasure to work with each and every one of you. From my first arrival, I don't know what preconceptions I may have had, but I know that, you know, everyone here is dedicated to making the language better.
They truly mean it individually, as opposed to, you know, the wider interests and so on that sometimes bring us here. So thank you all for caring so much. I'm here to support and serve, and hope to continue as long as you want me. So thank you. + +> More applause + +SHN: We cannot let Rob leave without comments. And since I know others also like whiskey, there have got to be amazing stories. And I thought AKI, you may start with one, since I know you've been at the table before. + +AKI: Call me busted. So I told SHN about the spreadsheet that the chairs kept, that a lot of you probably don't know about. There was a period of time where I feel like we had several meetings in a row where, wherever we were, it was nighttime for everyone. It just kept on happening that it was evening for someone and night for someone else, or someone was traveling. And so, about half an hour before the end of the meeting, the three of us would kind of quietly have a little whiskey toast. And we started tracking our whiskies, including our opinions. Then, when we started doing in-person meetings again, we would have some whiskies and leave our notes, and that included non-alcoholic whiskies, you know, to be inclusive. When I got rid of my Google account, the only thing I kept was our whiskey spreadsheet. + +SHN: Other comments about Rob, while we put up that slide that I was unable to share? Good comments obviously, 'cause there's nothing but good things to say. + +DRR: I just wanted to say that throughout the years on TC39, working with RPR has always been a pleasure. I know that he's always done his best to bring everybody together, to try to make sure that the language is the best that it can be, and, as a chair, that this committee works the best that it can together.
I think you've done a phenomenal job, and I think, you know, you're probably one of the most fitting people I can imagine to do it as well. Thanks for all that you do all the time. Yeah. And I don't want to gush too much, because then I'll start tearing up. + +SHN: That was nice. Thank you so much. And thank you so much for putting up the slide. So RPR, this is the citation that the committee gave to the General Assembly on your behalf for the award. I've just highlighted a few words that I was trying to also highlight earlier about your work: the outstanding and lasting impact that you have in the committee, the balanced solution, your leadership, your unwavering support. And of course you will have all of this; we'll send it to you so you can then frame it next to your award. + +RPR: That's wonderful. + +SHN: Okay. We have cake outside, so you can all congratulate Rob outside if you're ready for cake. But first, you all get to speak. I will stop talking now. It's up to you all. + +CDA: Yes. Would anyone like to say a few kind words about Rob? We'll save the airing of the grievances for the dinner later. Please. + +JGT: One thing that I'm always aware of is that most of the engagement of the people in the world with TC39 is not in this room. It's online. It's elsewhere. And one of the things I always really appreciated is your hyping what we do online in various places. And I think that's really important because, you know, we can make all the great decisions in the world, but unless the people out in the world know about it, and unless they support what we're doing, it doesn’t matter. And so I just really want to both congratulate you for doing that and encourage everybody to try to, you know, do what you can, because most people are completely unaware of anything new that's happening anywhere. And so the more we can do it collectively, the better. And I think Rob, you're showing the best example of how to do that.
Thank you. + +JSL: I’ve been involved with standards in some way or another for like 25 years. You know, W3C, IETF, a bunch of them. This is probably one of the only times, out of all that time, that I've seen a chair actually maintain a happy disposition. So, you know, I think that really goes to credit just who you are, and how you lead us. So thank you. + +CM: So one of the things that's striking, sort of echoing what he just said, is that for somebody whose job is making the trains run on time and, you know, imposing order and discipline on a bunch of unruly and highly opinionated people, you've done this with an uncommon amount of kindness and decency. And that's much appreciated. + +JHD: So when I first started on the committee in 2014, we had a chair. That chair was a nice man. But he did virtually zero for the committee. His job was to be a warm body that wasn't employed by Google, Microsoft, Apple, or Mozilla. And he did that job. He did that job very well. And so when he retired and we looked for other chairs, I was always kind of like, "Huh, I wonder what we need a chair for. We're doing kind of okay." We managed ourselves when we yelled at each other, and, you know, I don't even think we'd started using the queueing software yet. So things were kind of okay. And then we had another chair for less than a year, which didn't work out super well. But ever since then, I think that the chair group as a whole has done a tremendous job over time managing this very fast-growing group. I think there were less than 20 people at my first meeting, and now in the room we have 30, 34 I think. And there's like 80 people that could be calling in or attending. So yeah, I just want to echo what everyone else has said about RPR’s kindness and sort of happy disposition while managing all this stuff. He's also an engineering manager, so he's also dealing with that stuff, which is probably not stress-free.
And on top of that, he has for me personally stepped in to mediate a conflict with another delegate, very gracefully and elegantly, in a way that I would certainly not have been able to do if I were in his shoes. And it's quite impressive. So I hope that you do remain around for a long time. + +KM: I was gonna say basically the same thing that JHD said, but I'll repeat it in my own words, 'cause I think it deserves restatement just because of how impressive it is. Yeah. I mean, I distinctly remember the, let's call it, pre-RPR era of TC39. I distinctly remember there being huge fights, people storming out of the meeting room, meetings where delegates quit and never came back. Just bad stuff. And that's not to say every meeting was bad, but I have yet to see any of those since then, in the now-RPR era. So I think that deserves credit in and of itself. I mean, people here have strong opinions. They're very passionate about, you know, how JavaScript should evolve. And, you know, the ability to steer through all of that, keep everybody happy, keep people coming, is in my opinion as remarkable as the fact that anybody actually has a working JavaScript implementation. Maybe more impressive. So yeah. Thank you. + +RPR: Sorry, for people not in the room, I just want to say that means a lot. But also, there were many people taking steps to change the way the committee operated for the better. I think a lot of it comes down to BT introducing TCQ, as well as the code of conduct committee and the inclusivity committee that we had. So yeah, it's been very much a group effort. But yeah. It's important. All right. Is it cake time? + +SFC: It's hard to add to what everyone else has already said, but I just wanted to reiterate one thing that you've done as chair: really making this standards body very inclusive. It's always been very welcoming.
I think it really helps that we've been able to bring on, you know, people from all different areas of JavaScript, all different types of experience in this field, and then have us all be able to work together. I think that you've been doing a really outstanding job in making the body a welcoming and inclusive type of environment, which makes it a pleasure to continue to contribute, for all of us. I think that's great. And yeah, also everything that you've done to, you know, balance the transition into COVID and then out of COVID, in terms of in-person meetings and hybrid meetings. I think you also handled that situation very, very well. And I, you know, look forward to continuing that. + +RPR: Thank you, SFC. Something you reminded me of there is that I feel like I didn't know this stuff before I came in. I learned it from the other people who are here, like MBS, like AKI in particular. USA and CDA, they teach me all of these things. So it's a shared effort. + +JKP: I just wanted to say that I think it was cool that SHN learned that RPR likes hats and whisky. And getting to work with RPR, I know that he loves praise, and he has never been happier than when he's in the spotlight. But I did sincerely want to say that you're the reason that I joined TC39. It's been incredibly fulfilling for me. And I do agree with a lot of the earlier statements: your kind of outreach and inclusivity, and getting everyone involved, has been really impactful and really meaningful. So thank you for that. + +CDA: All right. I don't want to keep torturing you, but I feel like I should say something. You know, I don't want to repeat what everybody else has already said. But of course I have an additional perspective compared to most folks, because I am on the chair group with you.
So I think all of these same qualities that folks are bringing up, he also brings to the chair group. And what I mean by that is, you know, the chair group meets weekly, with all the stuff we're doing behind the scenes, whether that's things like conflict resolution or meeting planning or everything in between. And Rob is obviously, you know, a huge part of the chair group. I could not imagine him not being there. I secretly am deathly afraid that one day he will leave the chair group, and I don't know if I would be able to continue on. So really happy that he's here and doing this role. Thank you. + +RBR: So one memory that I have of us together is actually from Spain. And I find your shirt very fitting in this case, because it is a very nice atmosphere that I remember, us just being at the beach and enjoying the time there. And honestly speaking, before I joined TC39, I heard all these horror stories about how it had been earlier, without knowing anything in person. Being here, I must say I have never, ever in my life, I believe, seen a group of this size with lots of strong opinions where there is not someone speaking loud or anything like that. It's like the atmosphere of your shirt is the atmosphere of the whole room. And this is kept all the time. And this is amazing. + +CDA: All right. Have we put Rob through enough? + +SHN: Yes. Can I have the last word? I get the last word. + +SHN: Thank you, everybody, for your comments. They are all so wonderful and meaningful, and Rob is wonderful. I wanted to thank everybody also online. I'm sorry that you're not here with us. We are gonna celebrate with cake and chocolate, and there will be a social dinner this evening in which you may further celebrate, so we will miss you for that. But we will be on time tomorrow. So on that note, Rob, and everybody, let's have some cake. And so let's continue the celebration. Or Rob, would you like the last word? Rob, cake. Rob wants cake.
Thank you very much. + +CDA: We are. Just to be clear. The cake is also for Temporal. I think it might actually be exclusively for Temporal. But plus RPR’s birthday. So there's that. So yeah. Can we get one more round of applause for the combination of Temporal stage four and RPR’s ECMA recognition award? diff --git a/meetings/2026-03/march-12.md b/meetings/2026-03/march-12.md new file mode 100644 index 0000000..d3275d9 --- /dev/null +++ b/meetings/2026-03/march-12.md @@ -0,0 +1,1054 @@ +# 113th TC39 Meeting + +Day Three—12 March 2026 + +**Attendees:** + +| Name | Abbreviation | Organization | +|--------------------|--------------|--------------------| +| Samina Husain | SHN | Ecma International | +| Jeffrey Posnick | JPO | Bloomberg | +| Waldemar Horwat | WH | Invited Expert | +| Chengzhong Wu | CZW | Bloomberg | +| Dmitry Makhnev | DJM | JetBrains | +| Richard Gibson | RGN | Agoric | +| Olivier Flückiger | OFR | Google | +| J. S. Choi | JSC | Invited Expert | +| Linus Groh | LGH | Bloomberg | +| Lea Verou | LVU | OpenJS | +| Gustavo Tonietto | GTO | Mozilla | +| Ron Buckton | RBN | F5 | +| Stephen Hicks | SHS | Google | +| Evan Winslow | EBW | Google | +| James Snell | JSL | Cloudflare | +| Guy Bedford | GB | Cloudflare | +| Dan Lapid | DLP | Cloudflare | +| Steve Faulkner | SFR | Cloudflare | +| Eemeli Aro | EAO | Mozilla | +| Matthew Gaudet | MAG | Mozilla | +| Jason Williams | JWS | Bloomberg | +| Istvan Sebestyen | IS | Ecma | +| Philip Chimento | PFC | Igalia | +| Andreu Botella | ABO | Igalia | +| Ashley Claymore | ACE | Bloomberg | +| Aki Braun | AKI | Ecma International | +| Daniel Rosenwasser | DRR | Microsoft | +| Justin Grant | JGT | | +| Jordan Harband | JHD | Socket | +| Justin Ridgewell | JRL | Google | +| Kevin Gibbons | KG | F5 | +| Keith Miller | KM | Apple | +| Mathieu Hofman | MAH | Agoric | +| Michael Ficarra | MF | F5 | +| Mark S. 
Miller | MM | Agoric | +| Nicolò Ribaudo | NRO | Igalia | +| Peter Hoddie | PHE | Moddable | +| Ruben Bridgewater | RBR | | +| Rob Palmer | RPR | Bloomberg | +| Shane Carr | SFC | Google | + ## Opening & Welcome + Presenter: Rob Palmer (RPR) + ## First-class Protocols Update (continuation) + Presenter: Lea Verou (LVU) + * [proposal](https://github.com/tc39/proposal-first-class-protocols) + * [slides](https://tc39.es/proposal-first-class-protocols/slides/2026-03/) + LVU: Okay. Yeah, it's definitely on now. Just going to quickly go through the slides I presented yesterday to remind people what this is about: what remains from the original design (especially the code examples, which date to around 2018) and what has changed. Members are now explicitly required through `requires`. One of the sort of bike-sheddy questions is, should we use `requires` or `abstract` here? Then we switched to using computed property names for literal names, which also allows us to use external symbols. So you can actually use symbols from the outside that have been defined somewhere else. That was a question I got yesterday at dinner. You just use the bracket notation, so the computed property name. + LVU: It is very much framed around objects. There is nothing constructor-specific anymore. We dropped some of the syntaxes for implementing a protocol. We pruned it down to the essentials. Yeah, flattening. We fleshed out introspection a bit more. Protocols are immutable. I got a question about that as well. This "with strings" concept is kind of up in the air, both how to name it and also whether this is the right direction, whether a factory method is the right direction. I got some discussions last night at dinner that having a dedicated syntax for it could actually enable certain optimizations that this wouldn't. + LVU: And the open questions: should strings and identifiers with the same name be considered distinct members? Which affects a lot of the other questions.
`constructor` and `prototype`: should they be errors if used without quotes, or should they automatically be strings? Like, we know what the author's trying to do, so why throw? How does protocol inheritance work: `extends`, `implements`, or both? How do you access parent symbols? And a similar question about sub-protocol members: do they create symbols that you can access from the parent protocol? Or do you just have to keep calling protocol describe, which is very awkward? Yeah, those are the open questions. But obviously, everything around the syntax is also up for debate and feedback. + RPR: I guess this is the one where we did actually have a queue from yesterday. + LVU: Oh, right. + RPR: Is that in the notes? I assume that's normally where we put the queue items. CDA said that he took a screenshot, but it's not clear where that was put. Sometimes it's put in the delegates' chat. All right. We don't think it's in the notes. So, unfortunately, CDA may just have it on his local machine. If you do recall having a question or comment from yesterday, please put yourself back on now. EAO? + EAO: So yeah, I do remember this, because I was actually the next one up from yesterday. My sense is that one of the premises of this proposal, as I understand it at least, is that it is possible for a class or an object to implement multiple protocols at the same time. And I've not looked into what you've said about how this is done. But if I recall from yesterday, LVU, you mentioned that you were able to kind of combine all of the parent protocols into a single sort of combined thing, and for that to be the protocol that the class is then implementing. So if this is possible for protocols, I would presume that the same operation would actually also be possible for parent classes.
So would it be possible to extract from what you're proposing here a proposal to allow for a class to extend multiple classes, using the same logic that you're proposing is used for protocols? Effectively, and only do that part? + LVU: One thing that hasn't changed is that protocols are applied by copying their members onto the object they're being implemented on. They do not use the prototype chain. And basically, doing that for classes is exactly what the mix-ins proposal is for. That is exactly what it does. And yeah, that is a separate proposal which, as our understanding was, received some committee pushback. + MF: Yeah. You just can't do it for classes, because you only have one prototype chain. It's different when you're copying like this. + EAO: But could we create the same sort of meta-parent class, as is proposed to happen with the protocols, and have that be the parent of the class that we instantiate? + MF: You would have to resolve how their prototypes are used to resolve lookups as well. That is a thing that you can do. It sounds like the thing that you're asking for does very much overlap with the thing that was in the mix-ins proposal. But that's not at all the direction that this proposal's going. + LVU: Also, you don't have enough information to do this for classes, from the outside at least. I suppose implementations could do it, but yeah. Anyway, that's a separate thing. + EAO: So is the difference that protocols don't have parents? + LVU: They do have parents, but that's resolved on the protocol side. And how that works is also one of the open questions. Like, it's very clear that we want protocol inheritance in some shape or form. Does that use the prototype chain? Or does that also use a protocol mechanism?
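The copy-based application LVU describes can be approximated in userland today. The following is a rough, hypothetical sketch (the `Sequence`, `implementProtocol`, `required`, and `provided` names are invented for illustration and are not the proposal's API):

```javascript
// Hypothetical sketch of copy-based protocol application. A "protocol"
// bundles required member keys and provided (default) methods; implementing
// it copies the provided members onto the target's prototype, without
// touching the prototype chain.
const Sequence = {
  required: [Symbol.iterator],
  provided: {
    forEach(fn) {
      for (const item of this) fn(item);
    },
  },
};

function implementProtocol(klass, protocol) {
  for (const key of protocol.required) {
    if (!(key in klass.prototype)) {
      throw new TypeError(`missing required member: ${String(key)}`);
    }
  }
  // Copy, don't link: no change to the prototype chain.
  for (const key of Reflect.ownKeys(protocol.provided)) {
    Object.defineProperty(klass.prototype, key,
      Object.getOwnPropertyDescriptor(protocol.provided, key));
  }
  return klass;
}

class Range {
  constructor(n) { this.n = n; }
  *[Symbol.iterator]() { for (let i = 0; i < this.n; i++) yield i; }
}
implementProtocol(Range, Sequence);

const seen = [];
new Range(3).forEach(x => seen.push(x));
// seen is now [0, 1, 2]
```

Note that `Range.prototype`'s prototype chain is untouched; `forEach` is copied on as an own property, which is the distinction LVU draws from mix-in-style inheritance.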
If it uses the prototype chain, it's basically like we're trying to solve this problem of having single inheritance, and then we also have single inheritance here. That is kinda weird. Yeah. + SFC: Okay. I had a question, which is that in the slides you gave examples of built-in protocols, and I have a proposal where we're doing a built-in protocol. Is the idea with this proposal that we will define a bunch of protocols for all of the built-in protocols, and then in addition to that, users can create their own protocols? Or is the idea here more that users can use this mechanism to create their own protocols and then implement them on all the types that they want, but that built-in protocols will remain the way they are? + LVU: So there's absolutely an intent that implementations would provide their own protocols. And one of the goals is to reify the existing protocols. Do note that most of the existing protocols that we have are around requirements and not provided methods; we thought that provided members are the natural next step. Like, for example, if we were defining `Symbol.iterator` after protocols had existed, we may have wanted to also add provided members like `forEach`. But that was not a thing back then. Do you want to? + MF: Yeah. We have a slide that we presented at the beginning of the presentation. These are just some of the built-in protocols that we think would be nice to actually represent as protocols once we have that feature. + LVU: Although we don't want to have a rule that every single well-known Symbol that we add needs to also correspond to a protocol object. It should be on a case-by-case basis. Like, does adding a protocol add value? I don't think that if there's just a single required member, it would be particularly useful.
But if there are provided members or if there are multiple requirements, it does simplify certain things. + SFC: Okay. I understand that this slide is giving a list. I just wasn't clear on, like, is there going to be a thing called protocol.iteration? Like a new global in JavaScript that you can implement? Or are these just examples for motivation? + MF: I would really like for that to be the case. Whether it's done as part of this proposal introducing protocols themselves or as a follow-on right after, I would like to pursue that, yes. + RBN: So this came up yesterday in regards to the possibility that `implements` conflicts with TypeScript. In the C# language, when you have a class that may have a superclass and zero or more interfaces, they are placed in the same location after the colon (where we might use `extends`), separated by commas. So you can say `class C : B, ISomething, PSomething` using a combined list. In a way, it looks like multiple inheritance, but interfaces are kind of multiple inheritance, and you may run into some of those complexities when you're dealing with the possibility of implementing multiple protocols that may have default implementations. So there are things you'll have to figure out if you use a combined syntax like this, but it does avoid the issue with potentially clobbering TypeScript syntax. I was discussing this in the Matrix with JHD yesterday, and his concern was that you could only really have one actual superclass and would have to throw at runtime if you specify more than one that is not a protocol. But that's actually consistent with how C# does this, where C# only allows you to have zero or one superclasses, which must come first in the list, and then any other items in the list must be interfaces. So that might be a feasible option. + MF: Yeah. So, when you initially showed this yesterday, I thought, like, oh, this is not feasible for this syntax space.
But actually, the syntax space is available. I think you should have mentioned that explicitly. Yeah. It's currently just a left-hand side expression, so we could put a lower precedence expression there. And actually make that work. That's an interesting suggestion. I like that. + +RBN: I will also point out that in the ecosystem, when it comes to mixins, you will often see the approach where there is something like a `mixin` function used like `C extends mixin(B, M1, M2)`, so that's not too far afield from what people are expecting. + +PFC: So my question is around the Symbol keys in a protocol kind of looking like string keys. As a programmer looking at this syntax for the first time, that's very confusing to me, and it requires a mental leap to get used to. I understand the logic that got us to this place. So I don't want to say, "I want this to be done differently." Maybe this is the right solution. But I'm wondering if you could talk a bit about the design space and whether there seems like there might be a coherent solution that would make string keys look like string keys and Symbol keys look like Symbol keys. + +MF: I understand the apprehension here. Somebody else yesterday mentioned the same thing, that it's unintuitive if you're familiar with class syntax. And we are mirroring much of the aspects of class syntax. So, like, why doesn't it have the exact same semantics there for that? It is very important for the proposal to encourage the use of Symbols in the protocols you create. And for that to be the most convenient thing to do because we expect that that's what most people who define protocols will want to do, in most cases. Does it have to be through this particular syntax that we do that? No. But we still want to get to that point where it's really centered around Symbol-based protocols, existing or new, and that there are escape hatches for, like referring to externally defined symbols or referring to string keys. 
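For reference, the ecosystem `mixin` function pattern RBN mentioned above is commonly written as a fold over subclass factories; a minimal sketch (all names here are invented for illustration):

```javascript
// Each mixin is a subclass factory; mixin() folds them over the base class.
// This is a common userland idiom, not part of any proposal.
const Serializable = Base => class extends Base {
  serialize() { return JSON.stringify(this); }
};
const Countable = Base => class extends Base {
  count() { return Object.keys(this).length; }
};

function mixin(Base, ...factories) {
  return factories.reduce((Cls, make) => make(Cls), Base);
}

class Point {
  constructor(x, y) { this.x = x; this.y = y; }
}
class LabeledPoint extends mixin(Point, Serializable, Countable) {
  constructor(x, y, label) { super(x, y); this.label = label; }
}

const p = new LabeledPoint(1, 2, "origin");
// p.serialize() → '{"x":1,"y":2,"label":"origin"}'; p.count() → 3
```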
So I'm open to suggestions for what you think would be a better way syntactically to do that. But for now, that's the trade-off I had decided to take. And so far, the group hadn't decided that it was not worth that trade-off. But thank you for the feedback. + LVU: And there are many good things that we get from this. Like, you can apply a protocol to existing objects without worrying about conflicts, which was the primary motivation for using Symbols. That said, I think there might be value in a syntax that lets you define a protocol as being string-major, and then you have to opt out on a case-by-case basis and declare that, no, this is actually a Symbol. Whereas right now, you're defining symbols, and you have to opt out to define literal strings. And this is just my personal opinion, not something that we've discussed in the champions group. But I would personally see value in something like that. 'Cause there are basically these trade-offs that you need to balance: on one hand, avoiding conflicts; on the other hand, ergonomics. And right now, that is the sort of balance that we've ended up on. But there are many other potential options there for balancing these trade-offs better. + SFC: Most examples of protocols currently use strings. And that's kind of the approach that a lot of the ecosystem has been relying on. And if you have a case where there are protocols that conflict because they have the same function name, then why do they necessarily always need to have different behaviors? And if they are different behaviors, then that's maybe a bad way to name your function. Like, if you have a function called `length`, then it should return the number of items no matter who's defining the length. I shouldn't have to redefine the length function five times for five different protocols that all need a length function. + LVU: There are many reasonable ways to implement the same function.
Like, as we've seen many times in the past when we were trying to define functions that the ecosystem had already defined in a slightly different way. And also, in some cases, Symbols give you a way to disambiguate. Like, suppose you include multiple protocols that all define `connectedCallback` for a custom element. You can write your own connected callback with an actual name and pull all of these together. + KG: As a concrete example, there's like a dozen different things a method called "render" might mean. + SFC: If we consider that most of TC39's built-in protocols are still mostly string-based, and we want the ecosystem protocols to be Symbol-based, then this kind of decision could make some sense. In TC39, we go through the trouble of defining the string-based protocols, and we're not really the market for the nice syntax, right? But then if we want to encourage the ecosystem to use Symbol-based protocols, then in some sense that trade-off kind of makes sense. I do still share PFC's concern that class syntax is string-based, and that might be confusing for developers. I think it might be worth doing a user study on that. But also, I don't know if most users really are aware of the difference between string-based functions and not, so it's probably fine. But it would be good to have a little bit more of a stronger signal there from users. + EAO: So my sense here is that the proposed syntax for defining a protocol looks an awful lot like the syntax we are using for defining a class. And in that syntax, we already have an existing way of saying this is how you define a member property of this class which has a string key, a string identifier. And we have a syntax for saying how you define it to have a Symbol identifier. And I don't see why this should use a different syntax, or use the same syntax to mean something else.
That seems like just a way to mislead developers. And we shouldn't do that. + RPR: OFR has a plus one to that, end of message. And I've got 10 minutes left, three on the queue. + LVU: One thing I wanted to say is that we don't have a concept yet of generating symbols. The existing syntaxes for defining a class member with a Symbol name already work, if you already have the Symbol. But we don't have any syntax right now for generating symbols. + LVU: There is no syntax for doing that in classes right now. Yes, there could be new syntax, but right now, there isn't. + ACE: Yeah. Yes, we don't have syntax; we could add new syntax. This proposal's already adding a lot of syntax. I agree with EAO: using something that looks like a class but is going to be different doesn't follow to me when we're adding new syntax anyway, and the feature would be useful in classes too. + RBN: I have a different take on the Symbol versus string discussion. In C#, interfaces are strict, much as what we're trying to propose by using Symbol names for the protocol. One thing that's interesting in C# is that they have this concept of explicit versus implicit interface implementation. With explicit implementation, the syntax explicitly calls out which interface method signature this method is implementing (i.e., `void IDisposable.Dispose()`). Whereas with implicit implementation of an interface, you just name the method the same as the identifier spelling of the method signature (i.e., `public void Dispose()`).
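In JavaScript terms, the explicit/implicit distinction RBN is drawing might look roughly like this (a hypothetical sketch; `Disposable` and the class names are invented for illustration):

```javascript
// "Explicit" implementation uses the protocol's symbol as the method key;
// "implicit" implementation just uses the matching identifier spelling.
const Disposable = { dispose: Symbol("Disposable.dispose") };

class ExplicitResource {
  [Disposable.dispose]() { return "explicitly disposed"; }
}

class ImplicitResource {
  dispose() { return "implicitly disposed"; }
}

// A symbol-based check sees only the explicit implementation:
const hasExplicit = obj => Disposable.dispose in Object(obj);
// hasExplicit(new ExplicitResource()) → true
// hasExplicit(new ImplicitResource()) → false (until an alias is created)
```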
Something similar could be applied to this, where the protocol will look at the object on which it's to be implemented and if an explicit method using the Symbol is not found (i.e., `[MyProtocol.foo]()`), but a method with the same identifier spelling is found (i.e., `foo()`), it could create the explicit Symbol-named method on the object as an alias for the identifier-spelled version. Since this explicitly-named alias exists, it would then pass for any `instanceof` or `implements`-like check that validates that the symbol-named methods are present. + (Transcription drops for ~30 seconds.) + RBN: I think that in a way solves a lot of the issues. You don't need specific syntax like a `requires` to say that it has that thing; by default, having that member says that it has to be there. Instance checks would just be checking for the specific Symbol names, because they would exist, since the alias was created when the protocol was added. And I think it solves the confusion around members versus pre-implemented methods, because those just are methods that must exist, and if they don't, they get added. So I think a potential solution for those cases is to look at something like that approach. + ACE: Yeah, so for me personally, I'd love to see more use cases, not so much in terms of sheer number, but ones that I would find more convincing. If you go to slide six, it lists things which are protocols, the real-world use cases. But I don't think anything in this list motivates protocols much: definitely not `Symbol.isConcatSpreadable` or `Symbol.unscopables`; the use cases for those are fleeting. And obviously not `Symbol.species`. And for things where it's just a single string, you can just implement that method; there's not a huge benefit. + MF: Have you seen slides seven and eight? Like, those were more motivation.
+ ACE: The first slide, to me, is not convincing, because I thought those things are either too niche or already served well enough. The next one: I love category theory, I find it really interesting, but I don't know if the JavaScript community is ready for category theory at large. And again, structured-clonable: I'd be really interested in a proposal around structured clone and customizing that, but that proposal doesn't exist yet, so it feels like we're putting the cart before the horse. And then for the DOM ones, I just don't write HTML much anymore, so it's hard for me to latch onto these. So if there are ways of either going into these deeper, assuming a lower base knowledge of the pain here, or another domain, I'd really appreciate that. + RPR: Okay. We covered both of yours, ACE? + ACE: Oh, I forgot this. I'd also appreciate: we talk a lot about languages like Java, and Java has similar things to this, but again, I'm not massively convinced when we compare ourselves to statically typed languages, because JavaScript isn't statically typed. I'd be really interested if there are other dynamic languages like JavaScript that have something like this. Because for a lot of these protocols, when I'm looking at them, what I'd really want to see is TypeScript support, so you can actually describe not only that you have this thing, but that this thing should take a function, a callback that takes these properties and returns these things. 'Cause for a lot of these protocols, the name is only the surface; what's more important is the types. And without the types, again, it feels like it doesn't pull as much weight as it would in a statically typed language. + RPR: Okay. Two and a bit minutes remaining, two in the queue. JSL?
+ JSL: This might just be a factor of me not having the right mental model. Why isn't this just an interface? Why "protocol"? + LVU: Because TypeScript already uses that term. It is basically an interface. + JSL: Okay. But since we already have a syntax there, why are we calling it something else? + MF: The last time that this was presented, the TypeScript team objected to it being called an interface. It was presented as an interface. Yes, I am right there with you. + LVU: So we'll split it. + RPR: Shane? + SFC: Yeah. MF answered this a little bit on Matrix, but it seems like, besides the keywords, this protocol is implementable in a polyfill. Which also means that the main motivation of the proposal is that we want people to use Symbol-based protocols more, because what they can do right now, they can do; we just want to make it easier for them to do it. Is that right? + LVU: Yes. + RPR: I think that's the queue done. And you've got about 30 seconds left if you wish to say anything final. + MF: Thank you, everyone. I really appreciate the feedback. I think this was a really constructive discussion. Yeah, I'm happy with how it went. And I'm looking forward to doing more design work and coming back with our updates later.
+ +### Speaker's Summary of Key Points + +* Gave another quick run through of the presentation +* Received committee feedback: + * EAO is interested in multiple inheritance for classes, but the champions' group was not interested + * PFC is concerned about identifier names being Symbols as well + * RBN wants to explore `extends ParentClass, ProtocolA, ProtocolB,…` notation in the class head + * SFC claims most protocols are string-based, not symbol-based, and is skeptical about the protocols design being symbol-first + * EAO is also concerned about IdentifierNames being symbols + * ACE thinks we can explore a notation for generated symbols for class elements + * ACE would like to see the motivating examples expanded on more for people who lack background in each respective domain + +### Conclusion + +* Champions received feedback to help inform their design on the way to Stage 2. + +## test262 coverage strategies + +Presenter: Richard Gibson (RGN) + +* no proposal +* no slides + +RGN: Okay. So this is going to be less of a polished presentation and more of a show and tell. And I wanted to start with an explanation of the problem that I'm looking at. Let’s look at the algorithm for a relatively simple built-in function, `Array.prototype.pop()`. It has 10 steps, one of them implementing an early return. And as some of you may know, when you hit U, on the spec viewer, you get to see calls to user code. Which is convenient in this case because it highlights every time that user code can run, each of which is a potential source of throwing an error. And this function also has one additional point where an error can be thrown right at the top. It's deterministic. There's no implementation-defined behavior here. There's no hand-waving here. And in test262, we have a goal to cover all such details. + +RGN: The normal strategy for that would entail something like finding all steps where an error can be thrown, and creating a test file for each. 
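For instance, the deterministic error at the top of the algorithm is directly observable:

```javascript
// ToObject at the top of Array.prototype.pop throws for undefined/null
// receivers before any other step runs.
let thrown;
try {
  Array.prototype.pop.call(undefined);
} catch (e) {
  thrown = e;
}
// thrown instanceof TypeError → true
```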
In this case, I haven't even pulled up the test262 directory. Because I know that one aspect which isn't tested is the relationship between these exit points. So we've got first a check on ToObject, then a length determination, then if that value is zero, a Set of “length”. Or if that value is not zero, we jump into these later steps, where we do a Get of the index corresponding with the element that's about to be removed, delete the corresponding property, and finally a Set of “length”. All of these actually have to happen in order for an implementation to be conforming. You can't, for instance, set length before you delete the property. test262 right now, as a general rule, does not cover this. And for more complex functions, we actually do see implementation divergence. I don't expect it for `Array.pop()`, but it definitely does come up elsewhere. And I'd like to resolve that. + +RGN: So I have been playing around with what testing would look like if we addressed such gaps. And basically, what I came up with is something like this. A test progress helper allows us to take as input what I'm calling for lack of a better term right now, variations on the different aspects. Variations on the different elements that constitute inputs to the function being called. testProgress will accept those variations and invoke a callback for each specific selection, along with the expectation of what should be logged, and what exception, if any, should be thrown. I'll come back to the definition of what these variations are, but I just want to walk through a callback body right now. + +RGN: So for testing Array.pop’s first return (the consequent when length is zero), the callback is going to receive arguments for what should be logged, what should be thrown, how to make the method receiver, how to make a length getter for that object, and how to make a length setter for that object. 
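The ordering being tested here (Get of “length”, Get of the index, Delete, then Set of “length”) can be observed directly with a logging Proxy. This is only a simplified illustration, not the actual test262 helper:

```javascript
// Log every observable operation Array.prototype.pop performs on an
// array-like receiver.
const log = [];
const target = { length: 2, 0: "a", 1: "b" };
const receiver = new Proxy(target, {
  get(t, key) { log.push(`get ${String(key)}`); return Reflect.get(t, key); },
  set(t, key, value) { log.push(`set ${String(key)}`); return Reflect.set(t, key, value); },
  deleteProperty(t, key) { log.push(`delete ${String(key)}`); return Reflect.deleteProperty(t, key); },
});

const popped = Array.prototype.pop.call(receiver);
// popped → "b"; log shows the spec-mandated order:
// ["get length", "get 1", "delete 1", "set length"]
```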
For this scenario where we are calling ToObject, getting a length off of that object, seeing that the returned length is zero, and then performing a Set of “length”, that's all we need. + +RGN: And the body of this callback basically just looks like a definition of what should be thrown based on the input. Definition of thisArg along with a descriptor for “length”. So what we want here is that the length getter just calls the corresponding lengthGetter argument and then allows us to do a set. The length setter verifies that that has happened and disallows further invocations (so it's a call-once), and then calls the lengthSetter argument. Those two constitute our descriptor, and then we send them into the receiver maker to get thisArg, which we then use in an `Array.prototype.pop()` call. We expect in most cases to catch an error, so we're looking for that here. And then, if no error was expected and none was caught, that's a successful case so we can go ahead and push into our log what we expected to get. If an error was expected, and what we caught is an instance of the expected constructor, then that's also a successful case. Anything else is unsuccessful, so we push into the log a value that won't match expectations. Following that, we characterize the case that just ran. Here, I'm using `JSON.stringify()`, but I think probably we'll do something different if this actually makes it to maturity. And then finally we run the assertion that the observed log matches the expected log. So this covers all the calls that were produced along the way, along with that final exception for a thrown error or lack thereof. + +RGN: I said earlier that I would come back to the definitions of these inputs. And I'll do that now. For variations of the receiver, we start with values for which ToObject will throw. So that’s `undefined` and `null`, each of which should throw a TypeError. And these variations are ordered—I should go into that theory. 
We have all of these inputs, and we want them to correspond to an order where we start with every single input being “bad”. Every bad input should cause the algorithm in question to terminate abruptly. And when they are all bad, we expect that the first one will be the source of the thrown error. So that constitutes our first selection of cases. And then we can start to vary the selection. So first run with all-bad variations, expecting that the first input produced a problem. Then modify that first one so things get a little bit farther. If you're out of “bad” cases for an input, getting a little bit farther means the next input will be relevant. So actually, that's the case for receiver. If it's undefined or if it's null, those are hitting the error cases in ToObject. We're going to encounter a problem. The final variation for receiver is go ahead and make me an object. We'll get past step one in the `Array.prototype.pop()` algorithm, and uncover the problem with subsequent inputs. Which in this case is the getter for “length”. So undefined receiver gives us a type error, null receiver gives us a type error. That's this expectThrown field. And making an object does not. There's no expectThrown on this variation because it's a success case. It's going to allow the algorithm to proceed. + +RGN: But we do care about having a receiver that allows us to detect problems later on. And so this is a helper, which I can go into in detail if we need, probably we don't. Suffice to say that what it's actually doing is taking our length descriptor, defining an object, and then wrapping that object in a proxy that will allow gets of specific properties that exist on the target to go through, and generate an error for anything else. So for instance, if this were reading the index property before it read “length”, then the value returned by this proxy generator would throw an error because you weren't allowed to read an index property. 
You are only allowed to read “length” in this case. + +RGN: As for the length getters, these are actually really straightforward. The base case, the most “bad” variation for the earliest abort, is a getter that throws. When LengthOfArrayLike does a Get of “length”, if running the getter itself throws, then we're not going to proceed any farther. Otherwise, the output from a successful Get of “length” will be sent to ToLength. And ToLength has some substructure to it as well. It's going to call ToIntegerOrInfinity and then clamp the output. ToIntegerOrInfinity calls ToNumber. ToNumber calls ToPrimitive, and so on and so on. This stack constitutes probably the most common deep cases that we define. And test262, as a general rule, expects that implementations correspond with the spec, and that there's only a single implementation of something like LengthOfArrayLike. In practice, that seems to be true. But it might be nice to test it. This makeToNumberVariations helper allows us to do so wherever we want. It's accepting a log prefix, a function for adding to that log, some bad cases, and some good cases. + +RGN: So what we want here is basically ToNumber with all of the standard inputs, all of the things that would throw in those deep algorithms. If getting `Symbol.toPrimitive` throws an exception, if `Symbol.toPrimitive` returns something that isn't a primitive, if the hint is NUMBER then we should get valueOf before toString, if the hint is not NUMBER, we should get toString before valueOf, and so on. All of these cases can be covered by producing variations here, and that's what this helper does. But because we want a getter, we're going to map variations produced by that helper into wrapping functions that just return them. And these are the things that we're adding in. We also care about seeing value zero and value infinity, which are not problematic inputs to ToNumber.
They're just things that we want to see for testing `Array.prototype.pop()`. + +RGN: And the setter for length is even simpler. We have an error case where it throws. As in, the value is a function that throws. And specifically, we're constructing these dedicated errors so that we can differentiate the variations. This particular setter throws an instance of this SetLengthError that we defined above, and we expect that that's actually going to be thrown. And then finally, we see the good case for the setter, which is a no-op function that just logs the fact that it was called. So in getting that far, we've actually reached step 3.a in the algorithm, exited that step successfully without an abrupt completion, and then the very next thing will be return undefined, which we don't actually care about here. We're not testing the output of pop in this scenario. We're testing that all the steps of the path through it were encountered correctly. + +RGN: And that's pretty much it. We also have this one other special thing that I put in, this UnexpectedSetLengthError, for if the setter was called before the getter. It's not a specific aspect of the input, it's just for observing the behavior. I'm going to do one more thing, and then if anyone has questions or discussion, we'll go to the queue. And that is to actually run this code. + +RGN: Just for demonstration purposes here, I'm logging the parsed version of what the label should be for assertions in each of these callbacks. And then when I'm all done, we want to see that it passes. So these are the scenarios that we actually encountered. + +RGN: Okay. So with what we had, we would expect a large number of cases if we were naively executing them. But the implementation that I have for this testProgress function knows which step is supposed to produce the abrupt completion. That's the one with expectThrown.
And so once we encounter that, we establish a prefix of variations and we can ignore every other selection with that prefix. So the first thing we have is an undefined receiver, and we actually don't care about lengthGetter or lengthSetter in this selection because an undefined receiver should throw a TypeError before doing anything with “length”. Which in this environment, it does. The fact that we don't see an exception produced by that `assert.deepEqual` verifies that. We move on to a null receiver, which behaves similarly. And then we get to make a little bit more progress. The receiver is now an object. So it now matters that lengthGetter throws, because we expect to call it, and we see `expectThrown: GetLengthError` corresponds with that. + +RGN: The next lengthGetter variation returns a Symbol. Symbols can't be converted to a number, so that's a TypeError. It holds for both unique and registered symbols. Getting a little bit farther now, we're going to return a length of zero. Great. That means that we expect lengthSetter to be invoked. And there it is, aborting with a thrown error. So that's a problem case. Advancing a little bit farther, what if we replace that setter that throws with a setter that succeeds, but a valueOf that throws? And now we see that valueOf is called. That proceeds a little bit further. For the final two, we're back to a setter that throws. Okay. Actually, those last two look like there might be a little bit of a bug in the implementation. But you see the idea. It gets farther and farther along through these cases. And ultimately, either passes or fails. + +RGN: So we're basically doing the minimal coverage to verify that this branch through the algorithm has succeeded. And then we would do the very same thing for that other branch where the length returned is not zero.
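The prefix-pruning idea RGN describes can be sketched roughly like this. This is a hypothetical illustration: the `selections` generator and the variation shapes are assumptions based on the talk, not the actual testProgress helper.

```javascript
// Hypothetical sketch of prefix pruning: once a variation in some
// position is expected to throw, later inputs are irrelevant, so we
// emit a single selection for that prefix instead of expanding the
// full Cartesian product. Not the real test262 helper.
function* selections(dimensions) {
  function* walk(prefix, rest) {
    if (rest.length === 0) {
      yield prefix;
      return;
    }
    for (const variation of rest[0]) {
      if (variation.expectThrown) {
        // This selection aborts here; later inputs never run.
        yield [...prefix, variation];
      } else {
        yield* walk([...prefix, variation], rest.slice(1));
      }
    }
  }
  yield* walk([], dimensions);
}

// Illustrative variations matching the walkthrough above:
const receivers = [
  { label: "undefined receiver", expectThrown: "TypeError" },
  { label: "null receiver", expectThrown: "TypeError" },
  { label: "object receiver" },
];
const getters = [
  { label: "getter throws", expectThrown: "GetLengthError" },
  { label: "getter returns 0" },
];
const setters = [
  { label: "setter throws", expectThrown: "SetLengthError" },
  { label: "setter succeeds" },
];
const cases = [...selections([receivers, getters, setters])];
```

With three receiver variations, two getter variations, and two setter variations, this yields 5 selections instead of the naive 3 × 2 × 2 = 12.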
Now we want to verify that, okay, we're going to see in the log the get of that index and the exception accordingly. And so on and so on and so on. + +RGN: One last observation on this. We now have it in a single test file, not in six test files or 30 test files. In a single test file, we've covered a large aspect of the implementation of any given function. We know that the path that went through it was good because we saw the calls to user code, and even for those things that didn't call user code, we saw the errors that we wanted to see. Which actually gives us a lot of confidence, once an implementation passes this, that all the other aspects that do care about the behavior (that the value being set was correct, that the calls had the right arguments, and so on) are now in a context where the overall structure looks good. So with that, I think I'm finally ready for a discussion, if any, on the concept of doing this more widespread in test262. + +ACE: I'm first up on the queue. I don't know how close to finished this is, so I don't know where my feedback's landing. There's a lot of similarities with general JavaScript testing and spies and the spy APIs, where you can assert this was never called, this was always called before this, and perhaps it's just familiarity, but I find those test cases much easier to review than this. Perhaps this would become easy to review once it's familiar; seeing something for the first time is always going to be the hardest to review. So was there a reason this doesn't look like a traditional spies thing, where it's manually pushing into its logs rather than just having a generic spy utility? + +RGN: Two reasons for that, actually. Number one is that test262 is built on a lowest common denominator implementation. So things need to run as best they can. I didn't even go into the implementation of that.
Like that proxy thing: basically, if Proxy doesn't exist, we still want to be able to leverage these helpers, so we have to have a fallback for that kind of thing. The other reason is that we actually have an explosion of the matrix constituting all these variations. And I think that probably at maturity, brevity is going to be super valuable, where any manual logging risks growing to a lot. And if it's a lot, then the test is going to get incomprehensible. So I'm definitely leaning on that pattern. This is internally doing all that spy logic, but ideally it fades into the background for the test authors and the test reviewers, and we can just point them at documentation for what this kind of testing looks like in test262 and why. It's doing the same things that you're familiar with, but it's doing them as automatically as possible. + +RPR: We've only got about seven minutes left with the current topic. + +PFC: Okay. I was just wondering if there is any prior art for ordered spies in JavaScript testing libraries, because I've only seen unordered ones where you can assert that they were called or not. I'll skip the second part of my question. + +RGN: In testing libraries at large, I haven't done a thorough search yet. Inside of test262, we already have manual constructions of this pattern. There's a bunch of it in Temporal, as you know. And I want to make it more widespread. I don't care very much about the particular interface choices. I just want to see this kind of coverage. + +ACE: Actually, yes, it's very common. People dislike spies, but they dislike spies for the exact reason that doesn't apply to test262. I hate spies because spies traditionally try to test the implementation too precisely, and then I can't refactor it, and I can't reorder things. We're literally trying to do the opposite here. We're trying to write tests as adversarial as possible so that engines can't deviate. They can't refactor.
They can't change these things around. If that's our goal, then the reason people don't like spies doesn’t apply here. + +RGN: Yeah. This is a contract. And we really want to assert all the details of it. + +RPR: KG? + +KG: As a consumer of tests, this could be annoying. As an alternative to doing this at runtime, could you do this via test generation? + +RGN: I'm curious in what way you think that the implementers will need to know the internals of testing for this. My goal is that if a test fails, you'll get meaningful output: here is the exact combination of inputs to a particular algorithm, and here is the deviation in the log, what you did versus what we expected. + +KG: What log? + +RGN: We don't see it here because these are all passing. But basically, it will be the calls that were observed. And we can highlight, and we do highlight, the area where things went awry. + +KG: Looking at this log doesn't really tell me what I got wrong. + +RGN: Even if you were informed that, for example, the receiver was an object, the length getter returned 6, and the index property was not deletable? + +KG: Possibly? But this would be worse than my normal experience of looking at the test, trying to understand the totality of the test, and understanding why the test logic is different. + +RGN: Okay. Awesome. That is great feedback. I will definitely consider that use case. + +KM: To clarify, and maybe I'm misunderstanding you, KG, but because this is a large blob of code, it certainly feels like it would be hard to reduce where your error comes from. You might be able to deduce from the error messages what you did wrong, but getting a small test case might be difficult because it's all in this kind of framework thing that you need to then extract. But if it output a small test case that would reproduce this issue.
Like it built you a little tiny program basically. That would probably be fine as well. + +RGN: I think that’s very doable. + +KG: That would be fine, but at that point, I’d construct those programs at build time rather than runtime. + +KM: The problem with that is you would end up with, in some cases, a couple hundred of them, which is a great pain for test maintenance. + +KG: It's only a pain for test maintenance if you look at the generated code. If you have a directory of generated code that humans never look at, it's fine. + +RGN: In principle, yes. In practice, no. In test262, it breaks the GitHub PR interface, for instance. Imagine a contributor who checks in all the generated tests in the same commit as the source modifications. + +KG: There’s a way of marking generated files as generated on GitHub, and then they don’t show up in the PR interface. Anyway, moving on. + +ACE: I made an assumption you were using proxies, but it sounds like maybe you're not using proxies, and that helper there is actually a… + +RGN: This implementation is using proxies. + +ACE: Right. So my comment: I think there's a risk to coverage here of assuming that the algorithm is the only thing that's happening, whereas we know that inside engines the algorithm is not implemented just once. Maybe it is for new things, but over time, when things get optimized, the algorithm is duplicated. It's like, oh, if you have a proxy, go down this completely different implementation route. So you might look at this and go, technically we've covered all the steps, but if the inputs are controlling those steps via a proxy, we're actually only testing half the engine. + +RGN: I'm thinking about that as well. And I basically just run it in multiple modes. + +KM: Is that really part of the scope of test262? I mean, in theory, engines can do all kinds of things in some respects.
It's sort of like, if you're doing special optimizations in your engine, there's no way that some maintainer of an independent test suite is going to know all of the implementation details without white-box testing. + +ACE: Well, what I'm saying is, imagine a test262 where every single object was actually always a proxy, because it's convenient to control. You could then have real websites that are using normal objects with getters and setters that don't match the spec, because test262 didn’t cover that. + +KM: I'm not saying that there wouldn't be bugs in that code. I'm saying, do we think that's in scope for test262? Because, in theory, those kinds of is-a-proxy checks could happen essentially anywhere, in any line of anything in the spec. So at that point basically everything in the spec would have to be tested with proxies, or you might as well test nothing with proxies. + +RGN: But it's enough that I would say it's in scope. + +ACE: I would go further and say that test262 should focus on the non-Proxy case, because the majority of code isn't proxies. I think proxies are bad for security; we should cover them. But if test262 only covered proxies, that would be a bad use of test262, because the majority of code isn't using them. + +RGN: I mean, just to set the context again, this is not the only test coverage for any given point. This establishes that you wrote the algorithm steps in the right order for an implementation. And it exists alongside all of the traditional tests, which are inherently easy to understand but very narrow. + +KM: Yeah. I think that's fair. It's like how web platform tests, and there are plenty of buggy web platform tests, intend to cover the most common use cases, so that they have, say, a 95th percentile chance of catching compatibility issues before web developers would actually run into them.
I mean, you'll never get 100%, because there's just no way to cover the combinatorial explosion; that basically makes it impossible without white-box testing. + +RGN: And what this does is shove together the boring parts, like mixing up the order of some input validation steps. And then all of the interesting tests can be more prominent inside of the directory that contains them. + +KM: Gotcha. + +RGN: This is boring. I want it for everything, but I don't want it to overwhelm other testing. Because it's actually not interesting. There's really nothing behavioral in terms of function-specific effects. But it does establish confidence with respect to following the spec ordering. + +RGN: I think we're ready for PFC now. And that might be the last one. + +RPR: Yeah. Let's try. Okay. Yeah. + +PFC: We don't have a lot of this type of coverage in test262 currently. Maybe at most we sporadically check, for some methods, that such-and-such getter is called, or that the argument is converted to a number before this other error is thrown, kind of on a case-by-case basis. So there's always an element of "where's the point of diminishing returns?" Like I noted in my TCQ comment `{ toString() { return { toString() {…`, you can have arbitrarily deeply nested objects each with toString methods until the engine runs out of memory. And nobody's going to test that. I don't really have a sense of how likely there are to actually be bugs in the running order of these sorts of exception-throwing points or user-code call points. Does anybody have a sense of how often there are realistically bugs in this? + +RGN: I do actually. I have a rough sense of it as basically a power law distribution. Most of the problems are in those dedicated top-level algorithms. So in this case, if there’s a problem, it isn't likely to be in ToIntegerOrInfinity. It's going to be in `Array.prototype.pop()`.
And even across those top-level algorithms, the problems concentrate in specific areas. I chose this one because it's a simple example, but the area where I really want this kind of testing, where it's really needed, where I really have seen issues, is TypedArray construction. There's some complicated branching. There are checks that a lot of implementations do out of order, maybe from spec evolution or maybe just because that's convenient or whatever. But that's the part where we see it. I think descending down through all the possibilities is worthwhile, if it's convenient—and I do want it to be convenient—but I definitely don't care about deeply recursive inputs. As soon as we reach the point where a getter is invoked, I think we're good. But you could also use this framework if there were a case where you cared to go deeper. I just think that mostly I would want to see coverage broadly across all the exposed surface area before I care about looking at any deeper aspect of one of those algorithms. + +PFC: Does anybody have a sense of how often it happens that engines inline these reusable operations like ToNumber? How often does it happen that some piece of code has its own ToNumber implementation because that was more optimizable? + +RGN: That I don't have. That's a question for implementers. + +RPR: So we're actually over time at this point. Apologies. There's still two people in the queue, but could you summarize where we've got to? + +RGN: Yeah. I am planning to polish this up and start introducing it to test262. I very much appreciate the feedback on what people value, especially for the output of this, and wanting to save them from having to dig into whatever framework we end up with. So thank you for the feedback. + +RPR: Okay. Thank you. And just to dictate for the notes, the two topics we didn't get to were: from MF, this seems like it can be auto-generated, maybe reach out to the ES Meta maintainers.
And from OFR, this looks like we may want a fuzzer based on this instead. + +### Speaker's Summary of Key Points + +* test262 does a poor job of algorithm branch coverage (e.g. correct order of input validation steps in implementations), and what is there is cumbersome +* this presentation demonstrated a proof of concept for efficient branch coverage, based on successive relaxation of inputs that trigger abrupt completions from early to late +* discussion raised the possibility of doing something similar with code generation rather than dynamic evaluation, although that might be high friction for test262 contribution and review +* there’s also a potential concern that proxy-based probing may be incomplete due to bypassing optimizations; it might be important to cover non-proxy paths as well +* after some polish and refactoring, this will be proposed as a test262 PR to hopefully consolidate “throws” tests into one or few per top-level algorithm +* …but it will be necessary to ensure that failures are comprehensible and easily map to specific use + +### Conclusion + +* Further progress will take the form of a test262 PR + +## TypedArray concat + +Presenter: James M Snell (JSL) + +* [proposal](https://github.com/tc39/proposal-typedarray-concat) +* [slides](https://github.com/tc39/proposal-typedarray-concat/blob/tc39-plenary-march-2026/slides-export.pdf) + +JSL: I will say I'm going to hold off; I'm not asking for stage advancement. This is really just some clarification to continue part of the conversation we didn't get to the other day. + +JSL: There is still a question of the naming, whether concat is something we can use or not. Right now I'm actually leaning towards “from list” instead. But I'm definitely open to other suggestions. The biggest issue that we did not get to the other day was settling the iterable question.
And here, let me bump up the size of this a little bit more so it's a little easier to read. There are some complexities here. Obviously, we could be passed an iterable that can mutate the state of the other things that we're passing in. For instance, if somebody did a silly thing like transferring the underlying ArrayBuffer of a Uint8Array within the iterator, we're obviously changing the state. So we end up having to consume the full iterator before we actually do anything else with it. That's one. The other part of it is the iterable yielding incompatible types. There are questions of, if it's a different type, do we coerce or throw, right? So say we're concatting a Uint8Array, and the iterable yields a number like 6.1. Or if it yields a string that can be coerced to a number, are we going to coerce that? So I think there's a number of complexities here for iterables, and we need to understand if it's worth it. + +JSL: Now, the existing use cases, like Node's `Buffer.concat`, just accept a Buffer or Uint8Array. They don't deal with these. So in terms of the ecosystem use cases that exist today, we don't have this problem; we'd be inventing new capability here that we need to figure out how to deal with. Personally, I would just not do iterables, but there's an argument being made that it's more ergonomic if we allow iterables to be used. So the key initial question is: is there a strong enough sense in the committee that we need to support iterables? That's the first question. And that's an open question to anyone. + +RPR: Yeah. That's open to the queue right now. + +JSL: Yeah. Okay. While people are thinking about that… okay. MF. + +MF: Yeah. Would this be precedent-setting for other TypedArray helpers? Like, are we going to use this to justify how the findWithin-like operations would work and stuff?
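For reference on the coercion question JSL raises above: element writes into a Uint8Array already coerce silently today. A minimal sketch using plain index assignment, not any proposed concat API:

```javascript
// What "coerce" would mean here, using today's TypedArray element
// semantics: writes run ToNumber and then truncate/wrap to the element
// type. A hypothetical iterable-accepting concat would inherit exactly
// this question for values like 6.1 or "7".
const ta = new Uint8Array(3);
ta[0] = 6.1;   // fraction truncated: stores 6
ta[1] = "7";   // string coerced via ToNumber: stores 7
ta[2] = 300;   // out of range, wraps modulo 256: stores 44
// ta is Uint8Array [6, 7, 44]; none of these writes throw.
```

Whether an iterable-accepting concat should inherit these write semantics or instead throw is exactly the open question.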
+ +JSL: It could be. Yeah. findWithin, I think there's definitely overlap here. + +MF: I wasn't looking for a “could be”. Like, I feel like that's a yes or no. + +JSL: Not until the concerns here are a little more settled. With this particular case, the main thing that we really need to know is the length of the thing and whether the types are compatible. That is less relevant for something like findWithin. So I think the answer is yes, if we end up going with this. But yes, to varying degrees. + +JSL: Now, while it looks like folks are thinking about it, the other possible approach we can take here is to say we're not going to accept iterables, but maybe we'll accept array-likes. Right? And from what I've seen of the use cases that have been suggested, that actually meets the majority of the use case, more closely matching this pattern here, where you want to concat a TypedArray and then you pass in an array of elements. Array-like might actually fit the need better than iterables in general. + +JSL: So again, like I said, I'm not asking for stage advancement here. This is really just a feel of the room, getting a general sense of what direction y'all think this should go. If someone feels strongly that iterables have to be supported, okay, but if array-like is enough, then it constrains the problem much, much more. RGN. + +RGN: Yeah. I definitely have the opposite feeling of that hypothetical. Iterables don't seem good here. It seems like complexity with little benefit. On array-like, fine. Sure. But I actually would want to be careful about a wrongly-typed TypedArray. Like, if I end up searching a Uint8Array for a Uint16Array input, it'd probably be better to throw an error in that case than to try and munge it into some shape that could be used.
But a generic array-like of something that gets converted into the proper typed array, that seems convenient and helpful. But fundamentally, with respect to iterables, I would prefer not to have them. + +JSL: Okay. Thank you. PHE. + +PHE: Oh, yeah. Sorry, I got auto-corrected in my comment. It seems like this is all about convenience, and it doesn't really seem necessary. There are already tools in the language and the APIs to do these sorts of things, and they let us keep the concat API really straightforward. That seems like enough here. I deal with Uint8Arrays and binary data all the time, and iterables never come up. And even for arrays, we can just use `Uint8Array.of`, so there's not a super strong reason here to make this more complicated by supporting more types of inputs. That's my perspective, coming from the embedded universe anyway. + +JSL: Cool. Thank you. KM. + +KM: I don't know if this is entirely related, but to the extent that we support iterables and array-likes, I do not want to support `Symbol.isConcatSpreadable`. That's the bane of my existence. The first six months of my job was implementing that feature. It was an enormous pain for performance. So whatever we choose, I would like us to not support that. That is my only request. + +JSL: I agree. Cool. And MF. + +MF: So the question here seems to be phrased as: do we want iterables or not, assuming that array-likes would be accepted. And PHE kind of said a similar thing: both of those are only being considered for convenience. So I see that the reason for including both of them is convenience.
But a reason against including them is that their contents have this auto-conversion to the TypedArray type, and seeing a concat with numbers that wouldn't fit in that TypedArray type would be a confusing way to write that thing. Whereas if you just had the manual conversion to the TypedArray type included there, that would trigger in your head that, oh yeah, there's this modular arithmetic going on or whatever. So I think if I didn't support iterables, I also wouldn't support array-likes here. I don't actually care which side of that we end up on, but I would say that it probably should be the same, because we have the same reason to make that decision. + +JSL: And I absolutely agree. The first version of this just supported TypedArrays and ArrayBuffer. The only reason the iterables were added here was because of conversation following the initial presentation in November. None of the ecosystem use cases of this support anything other than the TypedArray. + +\[transcript stopped\] + +JSL: user-code side effects and any kinds of things. So I would definitely prefer that we keep it much simpler. + +MF: Okay. Then you've convinced me. Both of those seem like much stronger arguments than anything I've heard for the other way. + +JSL: Okay. OFR? + +OFR: Since you mentioned the detaching problem before, I think that can still happen if it's just an array. You can just have an accessor on one of the properties, and then that one can still detach the buffer. + +JSL: Yeah. + +OFR: So, I mean, from a convenience point of view, we maybe eliminated a few complicated cases. But from a spec and implementation point of view, it's almost the same. + +JSL: And KM? + +KM: Maybe this was discussed and I missed it.
But if we did support iterables, would we be having a check that the thing is a typed array at the beginning, and then not running the iterator protocol for those ones? Or would the spec just assume it's iterable and run the typed array through its iteration protocol? + +JSL: No. It's currently written so that it checks, and forks on that. + +KM: Okay. + +JSL: But I would really love to get rid of that other one. I can just have it be a TypedArray. Cool. Okay. So unless anyone jumps up and down and throws something at me and says, "No, iterables and array-likes have to be supported," I'm going to rip that out of this and then bring this back in May, and by that time, probably be ready to ask for 2.7 or higher. So okay. Cool. Thank you. + +### Speaker's Summary of Key Points + +* Getting the temperature of the room on support of iterables in the concat API +* Will be deferring the stage advancement ask to May + +### Conclusion + +* No iterables +* Renaming the function from concat + +## Amsterdam: Rust/JS meetup + +Presenter: Shane Carr (SFC) + +* no proposal +* no slides + +RPR: Okay. Thank you, JSL. So we're slightly ahead of schedule for the break. Given that we've got seven minutes remaining, I wonder, SFC, did you want to talk about your Rust meetup? + +SFC: So it turns out that the week that we're in Amsterdam in May is the same week that the Rust core maintainers are having their annual all-hands meeting. It's also in the Netherlands, in Utrecht, and it's in coordination with the Rust Week conference. I know that in the past, at least once or twice, when we've been co-located with other language maintainers like CSS and others, we've had meet-and-greets. And given this opportunity, I wanted to see if there are people who are interested in such a meetup. I have some contacts with the Rust Week maintainers and such, and could possibly help coordinate such a meetup.
It could take a couple of formats. One possible format is a social happy hour in one of the evenings. Another format: the Rust team has an all-hands meeting, a three-day meeting at this venue in Utrecht, Thursday, Friday, Saturday. So possibly Friday or Saturday, we could maybe have a breakout session focused on JS and Rust and what we can learn from each other. That would be more of an academic, meeting-type format, more like this one. If we did a more informal social, we could have lightning talks from both groups. And that could be in Amsterdam or Utrecht or somewhere in between; they're about half an hour apart on the train. So I wanted to see if this was interesting to folks. Obviously, I'd want to make sure that it fits in with the rest of the TC39 programming, like the task groups that we have scheduled. And what are some of the goals you would want of a meetup? Do you think that this would be productive and fruitful for us, to have this level of coordination? And what are some things that people would want to see out of this? + +RPR: MF? + +MF: I would like to register my interest. I think this is interesting. I just wanted to make sure that we are going to be able to avoid conflict with the TG5 meeting, which is on the 22nd. I believe that's the Friday. Or is it the case that you're considering a conflicting date? + +SFC: Okay. I mean, obviously, I would like to avoid a conflicting date, and that's why I think that a social in the evening would probably be more likely to fit in both schedules, given that we have the TG5 scheduled already on Friday. + +MF: Also note that it's at TU Delft, which is in Delft. So I don't know how far away these are. But we would have to make sure it's possible to actually do these. + +SFC: Yes.
I would not want to create a conflict with anything that we have already programmed. So if we did do it on Friday, I would try to figure out whether it's possible to fit them both in. The Rust all-hands is Thursday, Friday, Saturday, so we could possibly do it on Saturday. My worry there is just that everyone in TC39 is going to be gone, and we wouldn't have anyone from TC39 actually show up. So that's why it seems like putting it on either one of the evenings, or on Friday sometime, would be more likely to get turnout. I see MF nodding, which probably acknowledges that if we were to have it on Saturday, people might not come. +

RPR: So, whatever candidate date you select, we can add this to the interest survey that we already have, because we know the people that already look like they're going to come, so you can solicit more from that. And I think posting this as an issue on the reflector will also help raise awareness. +

SFC: All right. I'm also interested in what some of the outcomes would be. I'm happy to put this together, but I only want to do it if there's sufficient interest from this group. If it's just going to be me and MF and, like, four people, then that's fine, but that might call for a different type of event than if we wanted to have lightning talks and a bigger thing. So I'm interested in whether there's anyone else; you can also come talk to me during the break, anytime today, to brainstorm about this. +

RPR: Okay. Thank you, Shane. Sounds like we're going to have a very action-packed week, because we have TG4, the source maps meeting, on the Monday. I think there's also a meetup during the week in the evening. And then TG5 on Friday. +

SFC: Yeah.
So it'll be a very action-packed week. All right. Come talk to me about this during the break. And if anyone on the call wants to weigh in, you know how to send feedback, so please do so. Thank you. +

RPR: All right. Thank you, Shane. So it's time for our break. We're breaking for 15 minutes. We shall resume at 11:45. +

## Tree-shakeable methods +

Presenter: Evan B. Winslow (EBW) +

* (no proposal/PR/link) +
* [slides](https://docs.google.com/presentation/d/1zy10o54PMuh2c9Liz6RH5R3bnBpe-3-sy0uEXbACzjQ/edit?usp=sharing) +

EBW: Tree-shakeable methods for a faster web. My name's Evan. I've been trying to meet everybody. If I haven't gotten to you, it's not because I'm avoiding you or something. Please talk to me. I'm sorry I can't meet all of you on the call, but I figured I'd at least try to show my face here. So, yeah, let's get started. I want to talk about dead code. My goals for this talk and discussion are, first, to zoom out and get a bigger-picture view of the problem of dead code broadly, particularly on the web: dead code that we're shipping to end users in web browsers. And to try to put some kind of number on that. Then for the second part, we'll zoom in on a subset of that problem, and that's the methods part. And then talk about practical ways that TC39 can help, or why I think TC39 is well positioned to help. One non-goal I have is to bikeshed concrete syntax. I'm really just trying to focus on the problem space, and if we all think this is worth solving, then I'll come back. If you don't think it's worth solving, then I'll just save everyone the time. +

EBW: Let's dive right in. So how big of a problem is dead JS on the web? I did a little napkin math. For any number like this, there are asterisks, right? Plenty of caveats we can talk about. But I think there's on the order of tens of thousands of years of human life wasted every year waiting for dead JavaScript to download.
I have the basic math under there, and I can share more where these numbers come from over the next few slides. But basically people use the web a lot. The slowest 25% of bandwidth globally is much, much lower than maybe we are used to. And this adds up to a lot of time. When you multiply by trillions this is really bad. I would also say I do think this is a conservative number. And I can talk about reasons why that is, but I think the biggest reason for me is, the bandwidth number is optimistic. I think, because when you measure percentiles of bandwidth, you are heavily weighting in favor of faster users because they use the web a lot more often. And so they push the speed distribution up, right? If we sort of flatten this out and made it equal, every user across the globe gets an equal weight, this would strongly push the bandwidth number down and make it worse. So that, that to me is the biggest thing. But I have others, and I welcome questions about that. Let's see. + +EBW: Speaking of the bandwidth, I have a general experience of over time hardware gets better, right? I think this is basically not the case for users globally. In 2025, bandwidth was 9 megabits per second at P25. And it has gone down according to this graph recently. So far from a Moore's Law type of effect where the hardware people just save us eventually, it could be the case that this is just getting worse, right? And, if that is very unintuitive, I think it's important to remember that there's billions of people that don't use the web yet and aren't connected to the internet. Like, the world is a really big place. And that just adds up, right? So when in my research, what I see is, a lot of hardware gets cheaper over time. So that more people can buy the low-end hardware. And that is how they come online and get connected. But the absolute power of the hardware is just not going up. Over time, certainly not quickly. 
And according to this graph, which I'm still not sure I entirely believe, but this is from Cloudflare. It seemed like a good dataset to me. It could be getting worse. In terms of the distribution. Can I pause here and see if there is anything in the queue? + +RPR: Nothing in the queue. + +EBW: Okay. So I'll just keep going, but feel free to stop me between slides or something. Okay. Oh, I have in my notes here to drive home the point about eight megabits per second is actually probably fast. If you think about a country like Nigeria, which has 200 million people in it, their P25 is three megabits per second. So that gives you a sense of lots and lots and lots of people don't have good internet, basically. + +EBW: Despite flat or even declining bandwidth, the amount of JavaScript being shipped to users year over year just continues to climb on the top certainly on the top million sites, but it doesn't seem to matter what slice of sites I look at. This is the trend. These numbers are in the kilobytes of transfer size. So this is after GZIP, if they're even doing compression. Of the top million sites over the past five years. The way to read this is that the red line is P90. So the 10% of the top million sites had more than three megabytes of JavaScript and JSON on the wire. Which implies something like 10 megabytes or more if they were compressing that. Because GZIP is awesome and Brotli is awesomer, and everyone should totally do it, but also it can only do so much, right? Eight megabits per second is one megabyte per second. So that's three seconds just waiting for the JavaScript to download. And we haven't gotten to parsing, execution, et cetera. + +EBW: Also, this is only the JavaScript, right? But people go to websites for images and CSS, and the HTML, and those are big too. So all this adds up. And there's a lot of contention when you visit a site because all these requests happen at once, right? 
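The download arithmetic quoted above is easy to sanity-check. The figures here (3 MB of compressed JS at P90, 8 Mbit/s at P25) are the ones quoted in the talk, not fresh measurements:

```javascript
// Napkin math from the talk: how long does the P90 JS payload alone take
// to download at P25 bandwidth? Figures are the talk's, not measurements.
const jsBytes = 3 * 1024 * 1024;          // ~3 MB of compressed JS (P90)
const bitsPerSecond = 8 * 1_000_000;      // 8 Mbit/s (P25 bandwidth)
const bytesPerSecond = bitsPerSecond / 8; // 1,000,000 bytes/s

const downloadSeconds = jsBytes / bytesPerSecond;
console.log(downloadSeconds.toFixed(1) + " s"); // "3.1 s", before parse/execute
```

That three-or-so seconds is purely transfer time for the JavaScript, before any of the contention with other resources described below.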
So you're not getting your eight megabits per second dedicated to the JavaScript; it's shared with all the other resources that the page is trying to download. So it's growing. Also, AI is a thing and it writes a lot of code really fast. And I'm really nervous about how much code it's going to write and continue ballooning this number. I see every reason to believe this number will keep going up unless something changes. I think I said earlier something about execution. This doesn't count that, but the P90 for execution time for scripts on a page is something like 25 seconds, which to me is not “oops, my site was slow”. That's broken. I'm going somewhere else. Like, people cannot use the web at that kind of speed and experience. So, this is really adding up and hurting people, in my opinion. + +SFC: I was just wondering, on that graph slide, what's that almost 20%, 30% bump in 2024? + +EBW: Oh, yes. This is HTTP archive data, and I did not investigate that particular spike. So I don't have an answer for you. Sorry. + +EBW: I mentioned earlier that I think, 0.5 if we go back to this slide. So it says here 0.5 megabytes per view is dead code. And that comes from, basically a the data that I have managed to cobble together that is, is indicating to me 60 to 80% of JavaScript on any given page is likely to be dead, at least at page load. So this particular, coverage trace is from a run I did on the mobile-specific view of a top 50 site this is I, turned on slow network, disabled caching, turned on coverage, reloaded the page, and waited until all the main content had loaded. So, like, the images, basically trying to get get LCP. And this is, like, the state of the JavaScript at that point. so I interpret this as, like, this is the dead JavaScript that was competing for bandwidth with the main content of the page. And this I will note that this is even the case when scripts are loaded async. 
Because async doesn't block render, but it's still taking up network space. So it can cause congestion by competing with CSS, and CSS does block render. So even though async JavaScript doesn't technically block render, for someone on a congested network it can have the same effect. And then, of course, there's the CPU and memory usage as it streams in. 60 to 80% is just very typical. This is not some special case that I cherry-picked, right? This has been confirmed to me by talking with MHA from the V8 team. And I researched some academic studies that also confirmed numbers like this. And just in my own experience poking through the top sites, I keep running into this level of dead code. Okay. +

PFC: Do you have a sense of how much of this dead code is stuff like polyfills, or features that many browsers have natively? +

EBW: Yes. +

PFC: Like, it wouldn't be dead if it was loaded on an older browser? +

EBW: Correct. Yes. So that is very common. I don't have numbers for you on how much of the bundle comes from that source. But I spent way too much time looking at built code that was getting shipped to the browser and asking, what is this doing? And yes, I was seeing polyfills in Chrome a lot, right? These are just not needed; please stop shipping them to Chrome. But there are reasons people do this, right? So, yes. +

DRR: Quick question. Would you prefer questions coming in on a rolling basis, or at the end? +

EBW: Yeah, rolling is fine. If I need to get through something or if I'm about to answer it, I'll just defer. ACE says plus one on diving deeper. +

ACE: Plus one to what PFC was saying. Maybe this is almost a university-level bit of research, but I would be fascinated by a breakdown of that dead code: how much of it was error handling, which kind of has to be there because there could have been errors. +

EBW: Mm-hmm.
+

ACE: Versus polyfills, versus, like, a locale. I think that would be fascinating research. I'm not saying you have to do that, and I'm also not volunteering to do that, but I'd love to read it. +

JHD: Yeah. I'm sure that many polyfills that are shipped aren't needed. But just because a browser ships a feature doesn't mean it does it correctly, and many polyfills feature-detect the bugs and patch them. So although it would be ideal to only ship polyfills to the browsers that actually need them, it's exceedingly impractical for most web authors to have that sort of refinement. But I would love to see what the number is if you ignore polyfills; I assume the problem will still be large. Clarifying that would be helpful, because if we just tell people to stop shipping polyfills, then their apps could break, and almost certainly will, in surprising ways. +

EBW: Yes. Don't. +

JHD: Separately, if we come up with a good way to solve that problem, where we only ship the right thing, that's the best possible outcome. +

EBW: Yeah. +

JHD: I just haven't come up with one myself. +

EBW: Yeah. I was going to say, "don't break the web" is maybe the one thing I put above "please make your site faster". +

AKI: So this is possibly veering off into not-relevant, so please stop me if it is. Thinking about the people who are most likely to have slow internet: they're relatively aligned with the people who are likely to have slower or older devices. +

EBW: Mm-hmm. +

AKI: And I know that browsers update themselves now, generally, and people are much better at updating their apps from the app stores than they were years ago. But are those polyfills, which take up more time for people with slower internet, even necessary? +

EBW: Yep. Yep. That's a great question. I think it just needs more investigation. I don't have an answer today. Yeah.
Sounds like SFC's topic is for later, so I'm going to keep going. The natural next question that I consistently get when I talk about this with folks is: maybe it's just the event handlers, and we can't not ship those, right? And apparently, it's not. The academic study I looked at wrote a crawler that basically clicks all the things on the page, and found this number held after that. In my own experience, I tried to load the code and trigger click handlers, and it takes a while to make any noticeable difference in the amount that is getting used. I will also say, executed code does not necessarily mean useful code. Code that provides the user no value is effectively dead as well, but that's harder to tell; you really have to look at it and understand the program to make that call, rather than "it was never executed, so it definitely hasn't provided the user value yet". So even after interaction, this number remains very high. I was finding 60% very common. For the math on the 40,000-year slide, I chose to use 50% as a rough fudge number. +

RBR: All right. So of those 60%, is it also JavaScript that is for advertisement placement? Because I know that years back, this was not so much the case. Now, everything is loading advertisements everywhere. +

EBW: Yeah. Again, I can give you my anecdotal experience; take that for what it's worth. The advertisement placement tags on your site seem to have the same problem. They don't particularly move this percentage dramatically one way or the other, but they do increase the amount of JavaScript that's loaded, right? So if we have 60 to 80 percent dead, that pretty much seems to hold for the ads. And the ads are more JavaScript. So it just adds to the absolute amount of dead JS on the page. +

JHD: Yeah. So it's a common practice to put the entire website's code in one bundle, and it's not actually all needed on any arbitrary page. +

EBW: Yeah.
+

JHD: And so did this study actually navigate through the whole site as well, and determine what was unused after navigation and interaction? Because that seems more realistic. +

EBW: Yeah. That's a good question. The way it worked was it instrumented the code and then made sure it got behind all the dropdown menus and clicked on every link it could. I can't remember for sure about that particular question. +

JHD: Yeah. It seems like to do this properly, you'd need essentially code coverage metrics, you know, the standard statements and branches and functions and stuff, and basically do that for the entire website, on every page, and then look at the combined results to determine what was uncovered. +

EBW: Yeah. The branch-level coverage comment is interesting too, because every study I've seen so far only does function-level coverage. So you could actually have an executed function that also has dead branches, but those were counted as live because they only measured coverage at the function level. +

JHD: Well, yeah. And there are actually four metrics, which everything except Chrome's own test coverage tooling supports; Chrome's only supports three. And I think it's useful to check all four. +

EBW: So my takeaway from all this is: the world's a big place. We have a lot of users, end users of JavaScript. Let's keep everyone in mind when we're designing. There are billions of people, and that adds up quick. So we've spent some time talking about dead code and the problems. I'm going to talk about methods next. I just want to acknowledge this is about more than methods. I'm not saying methods are the only source or something, and there are many things that could be done outside of TC39. But I'm here talking about what I think TC39 specifically can do. So, in the data that I gathered, it appears to me that 35% of the dead code on top pages lives directly within a function that is very method-like. And method-like means uses object literal method syntax or uses class syntax.
Or uses the `this` keyword in a way that is not just referring to the global object. +

EBW: So I ran this on Google Search, and found that 100% of code is in methods. I was like, "no, that is not a thing". The reason it got that result was a practice where whatever the compiler was outputting added `this` at the very top level of the script to refer to the global object. I say that doesn't count. But otherwise, I counted it as methods. And if you want the full breakdown of the data, you can look in the speaker notes. Actually, a lot of references and more data are in the speaker notes, so if you haven't seen those, please do. And I guess the callout subtitle here is: this does not include dead code that was retained in other, non-method functions that are referenced from within a method. So, yes, just noting the caveats on the numbers that I'm sharing here. +

EBW: Okay. What's the deal with methods? Why am I focused on methods? Basically, the summary is: tree-shaking methods is hard. Here is an example of what happens when you try to tree-shake this. Most tree-shaking happens on the import graph. Bundlers that are broadly available and broadly used in the ecosystem do not try to introspect the content of methods to see what else they might reference; they just focus on the import graph. And so when you have something like this, the bundler basically cannot tell: is M2 ever used? I don't know. Maybe M1 calls it, or maybe there's some reflection thing. I can't tell. So I'm just going to keep it all. And this becomes even more of a problem if the class is not small. If each method has disjoint dependencies that reach out to other modules, all of that code gets retained as well. So this problem of dead code can be very viral, right? If something is sticky, then all its dependencies get stuck as well.
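The slide itself isn't reproduced in these notes, but the shape of the example is presumably along these lines (a reconstruction, not the actual slide code; names like `Widget` are made up):

```javascript
// An import-graph bundler treats this class as one indivisible unit.
// Even if no caller ever invokes m2, the bundler can't prove that m1,
// a computed property access, or reflection won't reach it, so m2 (and
// everything m2 imports) is retained. Names here are illustrative.
class Widget {
  m1() { return "used"; }
  m2() { return "never called, but still shipped"; }
}

const w = new Widget();

// Computed access: the method name is only known at runtime...
const key = "m" + (1 + 1); // "m2"
console.log(typeof w[key]); // "function"

// ...and reflection can discover methods without naming them at all:
console.log(Object.getOwnPropertyNames(Widget.prototype).includes("m2")); // true

console.log(w.m1()); // "used"
```

Any of those three access paths is enough to force the bundler to keep `m2`, which is exactly the "is M2 ever used? I can't tell" situation described above.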
And that's why making methods reliably tree-shakable can be super valuable. +

EBW: I have a note in here also that even if the program actually does need M2 at some point, it cannot code-split that, right? Just because of the semantics of how these things are glued together, class methods interfere with lazy loading and code-splitting as well. Even if the code could eventually be used in some case, it's not needed right now, and that's an optimization lost because of the way the code is written. So bundlers have open issues that are a decade old asking them to tree-shake class methods, and they have consistently said no: we can't do that safely. Why is that the case? I give three examples here of things that make it hard to tell what code is still alive. The first one is polymorphism. If you call the method M on an object, we don't know what class was used to instantiate that object, because no bundler widely used and available today uses the type system. So we basically have to retain all methods named M, regardless of where they come from. It gets even worse if you do something like a computed property, because then we don't even know what M evaluates to. So we have to retain all methods everywhere, because we might call a thing if a computed string turns out to evaluate to that method name. And then reflection is another way that you can get references to methods, which makes it fundamentally unsafe to delete methods off of class instances or objects. +

EBW: So maybe we should just make the bundlers smarter, right? That would be nice. Then people don't have to change their code, and the web just gets faster, right? No. Wrong. This is also really hard. Google's Closure Compiler is the only bundler I know of that even attempts to detect dead methods and remove them, and it does so at a very high cost to Google and to its engineers.
So basically, as a company, we have been willing to say: you need to follow these rules, and if you don't, the bundler will break you, and it's your responsibility to fix it. That is not a position bundlers more broadly have been willing to take, because again, don't break the web, right? It's very confusing and costly to have everything work in development, have everything work when I run my tests, and then: oops, I shipped to production and broke my site. That's painful, and we feel that as well. Also, some of the optimizations in Closure Compiler are based on its own unique nominal type system, which is not TypeScript; TypeScript is generally a structural type system, so it doesn't provide the same level of disambiguation to tell which M method I'm talking about. When we import what we call third-party code, external code written outside of Google, we generally have to just disable Closure Compiler on that code. We can't optimize it, because otherwise it would just break too much stuff. +

EBW: This has also been available for a long time, but it's admittedly hard to get to the advanced optimizations, and people haven't chosen to do that. I think the bottom line here that I'd like to drive home is: in order to get the advanced optimizations that Closure Compiler offers, you do have to change how you write code. So this is not a for-free type of solution. And I figure if people have to change code anyway, why not provide them a way that's more reliable to tree-shake once they've changed it, rather than "oops, now one of my callers broke the rules and my library's not tree-shakable anymore"? Because all the callers of the library have to follow the rules, not just the library itself. I have more in here, but I think that will do it. +

EBW: So methods are really common. People like them. But why do they do that, given the costs to end users? I think it's ergonomics. It's very convenient to write code this way.
And autocomplete and method chaining are very compelling, developer-experience-wise. So, you know, I think devs would like to not ship dead code to their users, but there's just a lot of friction, and they have their own concerns and pressures to deal with. It's very convenient to write code this way. +

EBW: How can TC39 help? I think we should treat dead code in general as a language design problem. Do the features and proposals we're considering hinder the fight against dead code, or do they exacerbate the problem further? I would love to ship a syntax that makes methods reliably tree-shakable. That would make a huge dent in this problem, I think. And the key with shipping a syntax here is that in order to get impact for end users, we need to convince current devs who use methods, or would otherwise use methods, to use the new syntax instead. So it has to appeal to them. The goal is to help the end user, but in order to do that, devs need to write code differently, and it would be great to make that as appealing as possible. So I came up with some quality attributes that I think should be true of any syntax that we explore in this space. Reliably tree-shakable: that's table stakes, right? Otherwise, we haven't solved the problem. Number two is very familiar to method users; the phrase I've been using is "as pleasing to use as methods", or at least approaching that as closely as possible. It can't have performance penalties, and it needs to be incremental: it cannot be the case that I have to migrate my entire code base before I get any benefit out of it. So something that's very safe and backwards compatible during migration would be key. All right. To sum up, my thought is we could have classes get rewritten from something shaped generally like this, to something shaped generally like this.
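The notes don't capture the two slide shapes, but one plausible reading of the before/after is the following. The concrete syntax is deliberately left open in the talk; this sketch just uses today's standalone-function form, which import-graph tree-shaking already handles:

```javascript
// Before: two methods travel together. Importing the class retains both,
// whether or not `goodbye` is ever called.
class Greeter {
  hello(name) { return `hello ${name}`; }
  goodbye(name) { return `goodbye ${name}`; }
}

// After: each former method is a standalone function taking its receiver
// as an argument. A bundler that only follows the import graph can drop
// `goodbye` entirely if no module imports it.
function hello(greeter, name) { return `hello ${name}`; }
function goodbye(greeter, name) { return `goodbye ${name}`; }

console.log(new Greeter().hello("web")); // "hello web"
console.log(hello({}, "web"));           // "hello web"
```

The open design question is how to get the ergonomics of the first shape (autocomplete, chaining) while keeping the shakeability of the second.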
So that bundlers that examine the import graph can delete the things that are not used, and then the web gets faster because we're not shipping as much dead code. So let's make it convenient to produce tree-shakable code. The end. Yeah. +

DRR: First up on the queue, we have SFC's question on how to convince decision-makers to spend time fixing it. +

SFC: I also want to get to some of the more technical things on the queue, but one experience I've had here is that this is a super important problem and I want to make progress on it. But unless I can show my VPs that, say, I reduced the download size of Google Photos by 20% and now we have 10% more daily active users, it's really hard for me to motivate this. So I was wondering, generally, how do we get leadership to agree to this problem? It makes the web faster, but how do we motivate that we should be spending time on solving it? +

EBW: I don't have a ton of experience motivating leadership outside of exactly the kind of thing you just said: if we do this, we'll cut this many bytes, and that'll increase revenue or whatever. I know for Google Search, I can't share numbers because they're internal, but sub-millisecond improvements count as really valuable, right? So find the leverage points where you can get the highest ROI, hammer on that, and demonstrate the benefit. I think that is the path; I don't know any other. +

KM: It's on the same topic, but, you know, I don't work on actual websites; I work on the browser. But my impression was that the actual costs of networking bytes at the CDN layer, and all these other places, are a significant cost for most web servers, and lowering those costs in a meaningful way would be very valuable. Beyond just usage-metrics improvements, there are also data center costs that you're saving. +

EBW: Yes.
The CPU cost of just bundling and shipping that from the CDNs is itself a cost, very measurable, at least for us. I have definitely seen big projects that took a long time to make that number go down and were highly celebrated. So yes, that is a relevant cost. I just hadn't talked about it today, but it's worth considering. +

DRR: We've got MM on the queue. +

MM: Yeah. So you mentioned 35% of something is methods. But I just want to first clarify: is that 35% of dead code, not 35% of code? +

EBW: Correct. 35% of the dead code, on the sites where I measured, was in methods. +

MM: Okay. Do you have a sense, for the remaining 65%, of what kind of code that is? What's the breakdown of that? +

EBW: Yeah. I have a very coarse-grained breakdown, but basically it's functions that are not methods, which is a huge chunk, and then things at the top level of the program. I'm not creative enough to come up with any other place dead code could come from. I think top-level, functions, and methods about covers it. But yeah, that was how I broke it down. +

MM: So between those two, the functions and the top level of the program, any idea of the breakdown between those two? +

EBW: I got numbers like 20% top-level, and the rest functions that are not methods. +

MM: Wow. Yeah. Okay. The reason I'm asking is I find it much more plausible to work on ways to safely shake the top level, rather than shaking methods, for the reasons that you raised. You explained well the reasons why it's hard to shake methods and get away with it. And bundlers do shake the top level, but they don't do it safely. Okay. That's it. Cool. +

DRR: We have a reply from SHS. I think this is about Closure compilation: that unexpected optimization back-off also happens a lot within Google. And that was the end of the message. +

EBW: Good call-out. So yes.
I think I forgot to say that, but Closure Compiler is far from perfect, and lots of new patterns are causing it to back off and lose the ability to trim these methods. So the problem is getting worse for us over time, rather than better. +

SFC: I was just wondering: in your experience, it seems like one of the reasons why methods are hard to tree-shake is because you could call them indirectly in a whole bunch of ways? +

EBW: Yep. +

SFC: And I was wondering, do you feel like using more Symbol methods makes methods easier to tree-shake? Because if you can find that the Symbol is not referenced anywhere, then the method is not referenced anywhere. And if that's the case, then do you feel that that would help motivate the protocols proposal? +

EBW: Anything that helps move the problem in the right direction is great. Let's see. Symbol methods really seem like they should be a solution, but you can still reflect on the Symbol properties of a class. So it's a matter of how risky you want your bundler to be, and at Google, we've dialed that up as high as we possibly can; we're willing to break things in order to get smaller code. It would be a matter of: is this enough? I think symbols would still suffer from the polymorphism problem. If you use a Symbol property on an object, some other object could have this Symbol property, and that code would have to be retained as well. So it would have to be "no code in my entire tree uses this Symbol", and then I can delete all the methods associated with that Symbol. Maybe, as long as we ban reflection also. +

SHS: For what it's worth, in Google's advanced optimizations we still consider Symbol property access to be a form of reflection. It may be possible, if the Symbol itself is deleted so you can't ever access the property in the first place, to optimize it. But other than that, it's not helpful.
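The reflection loophole EBW and SHS are describing is easy to demonstrate: even without any direct reference to the Symbol, a Symbol-keyed method remains reachable. The names below are made up for illustration:

```javascript
// A Symbol-keyed method is not dead just because nobody imports the
// Symbol: Object.getOwnPropertySymbols exposes it anyway, so a bundler
// cannot safely delete it without also banning this kind of reflection.
const kSerialize = Symbol("serialize"); // hypothetical protocol symbol

class Doc {
  [kSerialize]() { return "{}"; }
}

// Code that never mentions kSerialize by name can still find and call it:
const syms = Object.getOwnPropertySymbols(Doc.prototype);
console.log(syms.length);          // 1
console.log(new Doc()[syms[0]]()); // "{}"
```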
At least, that's the way we frame it. +

CZW: Yeah. I think it was mentioned that reflection would make this kind of minification impossible, especially if, for approaches like protocols, reflection is still a part of the proposal. But there are other proposals in TC39, like extensions and call-this, that would not attach the method to the object itself, so the method has to be referenced explicitly and then invoked on an object. And I think another issue here is: do we want to convince people to write code in a class style or a function style? One reason for the class style is that people can organize logic in a consolidated way; they don't have to scatter functions everywhere. So for extensions or call-this proposals to address this problem space, they will also have to address that: how can we avoid pushing people away from organizing their code in a unified, single place? So that even if they are writing their code in the new extension or call-this style, they can still organize their business logic like they used to in the class style. +

EBW: Is there a question or anything you want me to respond to there? Otherwise, happy to just have that on the record. +

CZW: I think what I said was replying to the previous topic. +

JSC: So as one of the co-champions of the pipe operator, languishing in limbo for more than a decade, I wanted to ask: if we made tree-shakeability or dead-code elimination a language-level priority, then would things like the pipe operator, and also other proposals like the just-mentioned extensions, call-this, etc., nudge the ecosystem towards less dead code? The pipe operator is more of a standalone-function thing than a method thing, and obviously standalone functions are a lot easier to tree-shake.
I just saw SHS on Matrix saying that there was a conversation at the January plenary in which the pipe operator helping with tree shaking came up. And SHS was saying that, as someone who owns an optimizer (I think he was talking about Closure Compiler), he has a vested interest in moving people away from prototype methods and towards standalone functions. And apparently there was a conversation in January about kickstarting the pipe operator again. + +JSC: So I'm asking here: how do you see the pipe operator fitting into this? And there's also that problem we've been wrangling with over the years: should there be more than one way to do it? Like, per TIMTOWTDI, do we encourage developers to do **any** of the following: prototype methods, standalone functions, and, with the extensions proposal, `this`-based functions? If we add stuff to the language that encourages tree-shake-ability, should there be one way encouraged or multiple ways encouraged? Obviously these are all already in the language; `this`-based functions and prototype methods and standalone functions are not going away. But as far as making it a language priority to reduce dead code, and adding syntax to try to ameliorate and mitigate dead code, I would be interested to see how you see it. And if Steve is here, I would also be interested to hear his perspective on this big problem. It also plugs into Jordan's topic later. So yeah, I'd be interested to hear your thoughts, and Steve's as well. + +EBW: Okay. JSC, I would love to meet and talk to you about this more. At the risk of totally avoiding your question so as not to take up the rest of the discussion time, I'm going to say the way I am thinking about existing proposals in this space is basically this slide here. So in my mind, the process would be: here are the criteria.
Like, let's weigh the existing proposals against these criteria and pick the best option, at a very coarse level. That's kind of how I'm thinking about it. So yes. But I would love to talk to you more about that, so please reach out to me afterwards. + +SHS: I hope that this is a motivation to revive one or more of these proposals and pursue them in earnest. They've been languishing. I'm going to jump ahead a little bit: what I said was that I think there's been excessive bikeshedding, which maybe is an indication that this isn't a priority. If people cared enough about this topic, we could maybe get past some of the bikeshedding. And that's kind of what I'm hoping this is a call to action for. + +DRR: We also have JHX on the queue. I noticed you put in a message before; it says, "Unfortunately, hack style pipe have very different code style compared with methods." + +MM: I'm wondering if the important issue is shakeable versus postponable. Right now, you can write methods of the form `async foo` and begin the function body with a simple `await`, so that all of the code in the function happens after that first `await`. Now let's say we had a way to say that that `await` doesn't mean postpone according to the deterministic postponement logic of the language, but just postpone to some future time. What that would mean is that, given a sufficiently fancy bundler (which is itself a weird assumption), code written that way, with that indefinitely-postponable semantics for `await`, could have the rest of the function not be part of the initial bundle, but be something that could be loaded lazily afterwards. And that means that you're not paying for methods that you don't use, but the methods are still there in the class, so they show up in all the reflection, etc.
And the semantics is just very, very close to the semantics of the language that you can express today. + +EBW: This is a very interesting idea. I had not considered it before. I would love to talk more about it, but I want to leave time for others to comment as well. + +MM: Okay. + +KM: We chatted about this a little bit at dinner yesterday, so I wanted to repeat it here for the committee. I could imagine proposals or new syntax in this space (I'm trying to be very vague and hand-wavy because I don't want to tie anything down, and I'm going to be quick) where somehow you can bind identifiers, basically as a Symbol, to some very static-analysis-amenable syntax. Then you can use those identifiers basically the same way that you use them today, but it allows both engines in the browser itself (so it's actually faster) and bundlers to disambiguate between these different properties effectively at compile time, maybe with some slight optimizations and things like this. There are probably many people who would not want to write their code in an entirely different paradigm; they've already been writing it this way for a long time. So those kinds of proposals might also help with adoption at a wider scale. + +EBW: Like an unambiguous reference to a particular block of code, which then, if I don't use it, I can just get rid of that code. That is on track with what needs to happen here. + +JHD: Yeah. One of the advantages of improving bundlers is that users don't have to know about it or change anything. Obviously not everyone uses a bundler, not everyone likes using a bundler, etc., and there are many bundlers, so there are lots of issues with that approach. But what you described really just sounds like functions to me. And as others have mentioned, with almost any combination of call-this, pipeline, extension methods, and other proposals in that space,
It becomes more ergonomic, because you get fluent chaining, to just use standalone functions. And if we have to teach users something, instead of new syntax to specifically target this problem, maybe we just teach them what has generally become a better coding paradigm in the ecosystem, meaning: don't do classes with methods. + +EBW: It sounds like you're saying you'd be okay with introducing a syntax just for call sites, but not anything new for the authoring side. + +JHD: Yes. I'm saying that we should make non-class usage more ergonomic, and then the place where we teach people should be where people are already being pushed, which is: do classes less. + +EBW: So that kind of an idea is exactly the kind of thing that fits the shape of this process: take that idea, evaluate it against the criteria, see what pops out the other side. Yeah. That is in the design space here. + +DRR: We're going to wrap it up really quickly. PFC. + +PFC: I want to be a bit skeptical about your number four up here: incrementally adoptable. Some of the skepticism I've had in the past about the call-this proposal and things like it is that you'll have dependencies that use it and dependencies that don't, and you'll have to mix the two. And maybe you get pipeline in there as well, and then you'll have three programming styles. I'm going to go out on a limb and say probably 90% of working JavaScript developers don't understand how `this` works in method calls at all. And they don't care, and don't have to care. Teaching somebody, "well, in this piece of code you have to use double colon, and in this piece of code you have to use a period"? Not going to happen. I think that's a social problem, and it seems to me the technical problem of building a smarter bundler, no matter how hard that is, is still easier than solving a giant social problem that affects the whole industry. I'd love to talk more about this but I don't want to take more time.
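A minimal sketch of the trade-off under discussion (all names hypothetical): a bundler generally has to retain every method of a class it cannot fully analyze, while an unreferenced standalone function can simply be dropped.

```javascript
// Class style: a bundler must generally keep every method, because a call
// like `obj[m]()`, reflection, or polymorphic dispatch may reach any of them.
class Vec {
  constructor(x, y) { this.x = x; this.y = y; }
  length() { return Math.hypot(this.x, this.y); }
  // Retained by most bundlers even if no call site is visible:
  scale(k) { return new Vec(this.x * k, this.y * k); }
}

// Function style: an unreferenced standalone function is trivially droppable
// once no import or reference to it remains.
function length(v) { return Math.hypot(v.x, v.y); }
function scale(v, k) { return { x: v.x * k, y: v.y * k }; }

const v = new Vec(3, 4);
console.log(v.length());             // 5
console.log(length({ x: 3, y: 4 })); // 5
```

In the function style, dead-code elimination reduces to reference counting of bindings, which is exactly the property the call-this and pipeline discussions above are trying to make ergonomic.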
+ +EBW: Yeah. It sounds like what you're saying is there's something missing from this list. I don't know that we have time now, but we should totally follow up and figure out what needs to go on here. Yeah. James. Okay. All right. Can you hear me? Okay. + +SFR: Steve from Cloudflare here. First off, I'm very spiritually aligned with the goal. I would like to see this work. Just piggybacking off JHD: I have gone so far as to basically stop using classes, for a lot of reasons, but one of them is this. I've had projects where I've linted away classes; I said, you cannot use `class` in this project because it's not tree-shakeable. And I think most people don't understand that trade-off. This is a thing that we should contend with, and it would be a cool option to see it contended with in the language. I have talked to the Vite team about this before. So my actual question is: how have you plugged into the existing ecosystem of what people are doing here? What comes to mind is the pure annotation: you put `/*#__PURE__*/` in there, and then the bundler will figure out it can probably tree shake something it otherwise couldn't be sure about. So they've kind of invented their own thing already. I'm curious if you've thought about whether they could invent something else. + +EBW: That sounds like another thing totally in the design space. And to the question of what I have done to plug into the community: maybe I did it in the most roundabout way, but this is me, hello, part of the community. Please plug in with me. I care about this problem deeply. I need your connections and conversation. So yes. + +SFR: All right. I definitely think if you engaged with, or even invited, the Vite team to one of these meetings, or Evan (I don't know how much the bundler people attend TC39), I think they would love to be involved in making this work.
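For reference, the pure annotation mentioned here is an ecosystem convention honored by minifiers and bundlers such as Terser, esbuild, and Rollup, not part of the language itself. A small sketch:

```javascript
// `/*#__PURE__*/` (also written `/* @__PURE__ */`) marks a call or `new`
// expression as side-effect-free, so a minifier may delete the whole
// expression when its result is unused. At runtime the comment is inert;
// it only guides dead-code elimination.
const unusedRegistry = /*#__PURE__*/ new Map(); // a bundler may drop this line
const config = /*#__PURE__*/ Object.freeze({ mode: "fast" }); // kept: used below

console.log(config.mode); // "fast"
```

Without the annotation, a bundler must conservatively assume the call could have side effects and keep it, which is the same conservatism that defeats method-level tree shaking.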
+ +DRR: I want to finish off with JGT. + +JGT: Yeah. So, of the features of classes that actually produce this problem: obviously enumeration of properties would be one of them, and the ability to use computed property names is another. Are there any other features besides those two that cause this? + +EBW: Reflection, computed names. Let's see. I can go back to the slide. So, the three... I guess maybe polymorphism is just a special case of this, but I think it is actually different, because in the polymorphism case, you have two classes with a method of the same name, and you're not sure which one you're referring to at compile time, so you have to keep both. And in the computed property case, you have one class with maybe many methods, and you don't know what `m` evaluates to, so you have to keep all the methods in that class. Are there any others? I don't know. I'm nervous to say no definitively, but these were the ones that I could think of. + +JGT: And the reason I'm asking is that if there were specific syntaxes that we know are dangerous on the call side, you could imagine a cooperative thing between libraries, bundlers, and linters that could say: okay, we define these three things as dangerous, and your linter can find every one of them. So therefore, for libraries that participate in this world (and sort of going where JHD was going before, that we may not need new syntax for this), because a bundler is required anyway, you could have a bundler-helping linter that would prevent people from, you know, calling these and essentially be more static-analysis friendly. + +DRR: We gotta cut here. + +### Speaker's Summary of Key Points + +* Tens of thousands of years of human time are wasted annually waiting for dead JavaScript. +* We should consider tree-shake-ability in all designs for the language.
+* Methods *hurt* tree-shake-ability but are very convenient/attractive. +* New syntax could help make tree-shakable coding patterns more attractive vis-à-vis methods. + +### Conclusion + +* Several proposals were mentioned as potentially helpful in this space: protocols, call-this, pipeline, extensions, and a new idea for deferring code in async functions. +* Several people indicated they are more hopeful about a linter/bundler-based solution than a syntax-based solution. +* More discussion is needed to explore the design space. + +## TypedArray findWithin continuation + +Presenter: James Snell (JSL) + +* [proposal](https://github.com/tc39/proposal-typedarray-findwithin) +* [slides](https://github.com/tc39/proposal-typedarray-findwithin/blob/tc39-plenary-march-2026/slides-export.pdf) + +JSL: This is just a quick continuation on findWithin. Like concat, I'm not going to ask for a stage advancement now; I just want to get some clarification on moving forward. I've updated the proposal. I have a PR to remove `contains`, like we talked about, to simplify it a little. Just like with concat, I'm going to remove the iterable support, following that precedent. The one open question we have is either the naming of the new functions (you know, pick a color for the bike shed), or we could just skip adding new functions here and overload `indexOf` and `lastIndexOf`. Since we are removing the iterable support, there's actually no change in behavior that we have to worry about, other than that `indexOf` right now, if you pass it a typed array, will always return -1, and with the overload, it will actually give you the index. So there is a change of behavior, but there are no additional throws, and the likelihood that anyone in the ecosystem is actually depending on it returning -1 is quite low. But before I go off and make this change, while we're here, I'd like to take the temperature of the room.
Does anyone feel strongly that overloading `indexOf` and `lastIndexOf` is the wrong approach, and that we need different functions here? And unless somebody feels very strongly about the naming, if we do add other functions, I'm going to pick something reasonable, because I think we could go around in circles debating the name. So I just want to open it up for questions on that, if anyone has any. I can't see the queue from where I'm presenting. + +DRR: We have KG on the queue. + +KG: You said this only works if we aren't accepting arbitrary iterables. I think we could probably get away with it, because the existing methods currently don't do any coercion. + +RGN: I understand and appreciate the viewpoint that this behavior change probably doesn't break any existing code. But the history of this committee is littered with exceptions to that rule. I'm actually not positioned as strongly as may have come across in Matrix, but I would really like some dedicated methods. If we want to try overloading, hoping to get it but backing off to dedicated subsequence-oriented methods if breakage is encountered, without stalling the proposal, that would be a fine outcome for me. But I do think it would be a mistake to pursue only overloading. + +JSL: I will note there is precedent for this in Node; I mentioned this in Matrix as well. Node has shipped `Buffer#indexOf` with the overload for years, and nobody's complained. It hasn't broken anything that we know of. So there's at least a very strong precedent already in the ecosystem for this. But well noted. + +RGN: It's actually an overly strong precedent, right? Because you can specify a string and an encoding; it's not just limited to an element or a subsequence. But yeah, again, I would not have the overload. I really think we probably want a dedicated method, and if problems are encountered, we'll fall back on the dedicated method anyway. + +JSL: Okay.
And I will say, in retrospect, if I had the opportunity to do the Node thing over again, I would have pushed for a separate method there as well and not overloaded it. + +NRO: I think I feel similar to RGN here. I understand that it's likely very rare that people are passing typed arrays to `indexOf`, but at web scale, it feels to me like an incredibly risky change. We would probably need some browser to ship use counters early, maybe in stage 2, to see if this is doable. I understand that people probably do not intentionally pass typed arrays in there. I think it's more likely that somebody is passing a typed array unintentionally, hoping to get the behavior that this proposal is suggesting, but actually getting -1, and they might already be relying on it returning -1 without even knowing. So we would need a browser that's willing to ship early use counters before making a design decision on whether to overload or not. + +JSL: I can reach out to the Chrome folks and see if they're willing to do that. + +MF: I just wanted to voice my support for the overloading. I was very convinced by some of the precedent that was discussed during the session and in Matrix earlier, particularly the overload that exists in `Buffer`, but also overloads that exist in other languages for exactly this case. I was initially skeptical of whether that would make sense for users, but it seems it would. I also understand NRO's point, and I would be fine if implementations were hesitant and wanted to get usage counters beforehand. But I don't actually expect that will be an issue. + +KG: TypedArray usage is pretty low, and the people using them tend to be more specialized and careful, so I would be quite surprised if this broke anything. + +DRR: Queue is empty. + +JSL: Okay. So yeah, that was it. I don't think it answered my question. I will be bringing this back in May.
And we'll go from there. + +### Speaker's Summary of Key Points + +* Reviewed the naming and iterable issues. +* Overload `indexOf` or add new functions? + +### Conclusion + +* No iterables, like concat. +* No firm conclusion on overload vs. new functions, but the committee seems to be leaning towards new functions. + +## Call for hosts + +Presenter: Rob Palmer (RPR) + +RPR: I hope everyone can agree this has been a successful meeting, at least so far. Something very special is that we're here in person, as we do three times per year. Lots of people have opinions on where we should go, where we should meet, aspirations, that kind of thing. The thing that underlies our ability to have these meetings, to go anywhere, is volunteer hosts: people who can provide venues, cover the lunches, and cover all the things that go with this meeting. So it's about time. We've already got our schedule for this year planned out, so we know all of 2026. What we don't have yet is the plan for 2027. As I say, we will do the usual survey about what kind of cadence, what kind of time, and where people would like to go. + +RPR: But those are all wishes. The things that provide the capability for us to do this only happen when we have people volunteering. So we'll post about this on the Reflector in the usual way: the call for hosts. I will just bring that up on the agenda there. But yeah, what I'm asking is that if you like this kind of thing, the way we meet up in person and have these meetings and connect, and want to keep it going, please do see if your company can volunteer. I think we have the semblance of perhaps one volunteer from Paris I heard about last night, but apart from that, I'm not aware of concrete offers at the moment. So please reach out to the chairs. I'm available here today if you'd like to talk about it, and we'll be putting up a Reflector post to make this official very soon. Are there any questions?
+ +RBR: I asked Chris yesterday about suggesting Datadog as a host. Now, Datadog is not yet a member, and he was uncertain whether a non-member is actually able to host or not. + +RPR: Yeah, it is normal that we have a member company hosting. I think there are no Ecma rules that forbid a non-member, and we'll be speaking with SHN to clarify that situation. But certainly we really appreciate the offer. + +## Thenable Curtailment for Stage 2 + +Presenter: Matthew Gaudet (MAG) + +* [proposal](https://github.com/tc39/proposal-thenable-curtailment) +* [slides](https://docs.google.com/presentation/d/1i0jY_e-A3rkd9256VzirJdPVtHInYNGjtYq_KrTBMuo/edit) + +MAG: All right. So let's just get started. This proposal has a terrible name, and it will probably need to be renamed as we come closer to agreeing on a solution. But let's get started. So, a reminder of the problem we are trying to deal with: objects with a `then` property are thenables, so they're treated specially during promise resolution. The problem is that it is super easy for security issues to happen as a result of this, particularly given that the hunt for the `then` property goes all the way up to `Object.prototype`. + +MAG: Since I last presented, I've had one more security bug that I know of that was caused by this, and since I prepared these slides, a second one showed up. So this is a recurring theme, and I would like us to try to provide some support to do something about this. That is ultimately my goal. + +MAG: All right. To remind us a little of where we have been and what we've talked about: I originally came with this idea of adding an \[\[InternalProto\]\] internal slot to specification-defined prototype objects.
And then basically teaching the promise resolution to use a new abstract operation, which I called GetNonInternal, that would stop exploring the prototype chain for the `then` and `constructor` properties when it hit the \[\[InternalProto\]\]. I don't hate this solution, but there are some challenges: polyfillability, and it's a special case in a weird way. And it doesn't fix all the problems. It only really fixes the problem for newborn objects in the web platform, because once you hand an object off to script, script can always put an own `then` property on it. So you're not entirely safe, but a little safer. We've talked about SafeResolve as well. + +MAG: The idea there was that you would start the resolution process, but at any point, if you discovered you would run user code, you would stop and say, "I'm going to enqueue a new job." And there are some open questions: how compatible is this going to be? How do you deploy this? I was kind of stuck here. So I did an experiment. The experiment is basically that I implemented a basic version of SafeResolve. It looks a little different from how I had originally pitched it: rather than starting the algorithm steps, at the beginning of the algorithm you ask, "Hey, is there any chance this could possibly run user code?" And if there is, you enqueue a new job and wait until that job is dequeued to actually resolve the promise. How to deploy this? The thing I'm most interested in is making the C++ code in the browser safer, so the way I did this was to deploy it inside all C++ promise resolution: all C++ promise resolution went through this path. And then I ran all of the web platform tests. These are not a perfect substitute for saying this will be web compatible. But they're also not terrible.
And they'll highlight some of the problems: if 300 of these had failed, I would immediately know, "Hmm, this probably won't work." + +MAG: The good news is basically everything passed. The tests which failed were mostly either trying to use `then` to observe resolution timing in ways that probably don't actually need to be done (or you could just adjust the expectations), or were validating that if you break `then`, other things don't break. And since this proposal eliminates that problem, those tests can just be deleted. I've updated the repo README with the actual details, so you can go through that if you're interested. + +MAG: I also have run this past our DOM team, because they too are very tired of constantly chasing these kinds of bugs. It's quite nice; they support this in general. One of the reasons is that it makes the "resolve promise" steps for web specifications no longer run script, so you don't have to think about whether resolving a promise is suddenly going to run script. It's hoped, by a lot of us, my DOM team included, that this has a pretty high chance of being web compatible, the theory being that almost nobody cares deeply about what order resolution happens in. Fingers crossed. If it's adopted by WebIDL, it will make writing specs less error-prone: it means that specifications don't have to deal with what happens if user code runs in the middle of some algorithm that you're writing. And this is just a screenshot of what the actual WebIDL resolve steps currently look like. Okay. + +MAG: So to ground this concretely, I have put together an approximate specification. Note, I'm not an editor; there are actually problems in here. I'll show some of the spec text in reverse order. And I have adopted some hints about how much you should trust this specification text, chief among them that I put it in Comic Sans to communicate a certain level of playfulness.
So this is MaybeDeferredPromiseResolve. This is your high-level operation. The important thing here (I don't know if you can see my cursor; you sort of can, it's not amazing) is step three: RequiresDeferred. This checks whether or not you need to defer the promise. If you do, then you set the promise state to "deferred", enqueue a new job, and return. Otherwise, if it doesn't need to be deferred, just do PromiseResolve, because that's what you're here for. I want to give explicit thanks to MM, because in a TG3 review of this concept, MM highlighted the fact that we would actually need to set some sort of state; otherwise you could race with somebody else trying to resolve the promise with some other value, and that would be bad. So I have added this "deferred" state, although I've not actually threaded it through the rest of the specification; it's just there to highlight that we would need something to stop someone else from stomping on the pending resolution. + +MAG: Next, NewDeferredSafeResolveJob. I kind of struggled with how to write this. Again, I'm not an editor, and I didn't want to get derailed too much on details, because we're only going for stage 2 here; we don't need perfect spec text. So, you know, "let job be a new Abstract Closure that does the rest of the promise-resolve stuff without accidentally creating another job", because, as JRL highlighted, the way that I had originally written this, you could end up creating a delay of two ticks, not one, which defeated the whole intent. I didn't have quite enough time to sort that out, hence the very long comedic operation name. + +MAG: This is the meat of the problem. There is a problem in this spec text again, but the general idea is that, given the value you are resolving, you want to decide whether it needs to be deferred.
So if it's not an object, of course you can immediately resolve it. If it has a proxy handler, then you definitely need to defer, because proxies can do anything. If it has an own `then` property whose descriptor is a getter, then the operation also returns true. Then you loop up the prototype chain. This constructor lookup probably also needs to be a loop like this, but I wasn't 100% sure, so I didn't write it down that way. I also couldn't figure out whether getPrototypeOf can trap on a getter, so that may need special handling. Again: Comic Sans, trying not to be too serious about spec text at this point; stage 1 going for stage 2. + +MAG: Okay. So I do have some open questions that I don't have answers to, and that I suspect will take a substantial fraction of the discussion time. Do we expose this to userland? Which is to say, do we give users access to a capability where they can resolve a promise using these same semantics? As somebody who is really trying to fix browsers and make them less security-hole-filled, I don't have a strong feeling about whether we need this for users. I think there are some people who will want it. A challenge for me is that I don't know what to name it. For a user, it is a very complicated operation to give a name that is not actively misleading. I don't even like the name SafePromiseResolve, because I think it is kind of misleading: safe from what? So, if we do expose this to userland, what do we call it? Again, I don't know. + +MAG: There's still a whole bunch of work to be done. I would need to do a much more fulsome validation that this is web compatible. This is hard: web compat checks of this nature are very hard, because site breakage, if it exists, will probably be subtle and require quite a bit of diagnosis to actually lay the finger on this specific change.
If we were lucky, we could ship it to Nightly, have somebody report it quickly, and then be able to bisect down to the specific build that turned this on. But this is among the worst-case scenarios for web compatibility checking, because this is not going to just highlight in DevTools that, hey, this name collided with something else. I suppose I could add a DevTools message saying, hey, promise resolution order changed a little bit here. But that's a challenge. I need to talk with other specs to make sure that they're on board: just because my DOM team is interested in this does not mean that I can get the HTML editors or the WebIDL editors interested, but I'm hopeful, because every browser has had problems of roughly this shape. I need to do a proper implementation, a proper specification, and figure out a name. If we decided that we didn't love the idea of doing this universally, I still think there's value in exposing the abstract operation to other specs so that they could use it, even explicitly for newborn objects, as a way of building a spec culture of "hey, I'm in the middle of a complicated operation, maybe I should do a SafeResolve of this promise", where that is something that ECMAScript provides, supports, and is okay with. + +MAG: So yes. I expect lots of discussion, and I am hoping to take this to stage 2 today. I have raced through this because there's not a lot here, and I gave a 60-minute timebox mostly for discussion. So let us save the time for discussion. A clarifying question, for which I have entirely lost the context: MF, when was this? + +MF: You had a slide 12… Yeah. The thing that you're exposing to userland, are you asking about the predicate or the actual— + +MAG: The whole operation. So basically that would be MaybeDeferredPromiseResolve, or whatever we call this operation. The predicate is maybe also possible, but again, same problem.
And I have talked with MAH before, and he says they're kind of okay with the fact that this is a weak ability to detect proxies. I think if we exposed the predicate, they might be more irked, because then it would be a synchronous way to detect proxies. There was some comfort in the fact that you basically have to count ticks, and it's a little bit indirect. So I think the actual question (when I said this, and thank you for calling me out, I probably should have just written the name) is: should we expose MaybeDeferredPromiseResolve, or whatever we call it, to script? + +MF: Okay. Because I'd possibly be interested in the predicate as well. + +MAG: Okay. I will write that down in my notes as something to think about. + +KG: Yeah, so my question is: would we use this in ECMA-262? More concretely, I saw in your README you had an example of a CVE that would not have been solved by this, which was the thing from a year or two ago where, if you put a `.then` method on `Object.prototype`, then the async generator resolution machinery, with the newly created iterator result objects, would trigger that `.then`. And I was looking at this and thinking, oh, this would definitely solve the issue that we had in the spec. So I'm wondering concretely why you said we wouldn't use this there, and whether we'd use it in other places in 262. + +MAG: Well, okay. So, maybe. In the interest of trying not to get derailed, I didn't want to use this to replace the actual PromiseResolve steps, which would broadly fix all of this, but then everything's tick timing would change, and that would almost certainly cause problems. Targeted use of this inside the specification, I think, could be considered. And once the capability is there, people writing future specifications could say, yes, it makes sense for me to choose this option. I haven't given it great thought.
I did highlight that it wouldn't have fixed that one, mostly because I was not planning on touching that part of the specification. But if we deployed it inside the specification, it would in fact have fixed that one. Yes, you are correct. + +KG: I was thinking of this as like you would use this anytime you were resolving a promise with an object that you created and did not expect to be thenable. + +MAG: Probably yes. But it makes the scope bigger, and I'm trying to avoid too much scope creep here. Is maybe my biggest thing. I feel like there's a chance for us to do a targeted thing that we could get to stage 4, that could actually be deployed inside of, you know, web browsers and provide value. And then we could come back and say, yeah, but you know what? There's other places that it totally makes sense for us to do this. So that's my whole thing about scope. + +KG: Sure. Yeah. That's actually one of the only places in 262 where that happens right now. So—yeah, I'm not attached to doing this to existing things as part of 262, but I am more concerned about being able to do it in 262 in the future. And it sounds like you would be on board with that, so. + +MAG: Yes, absolutely. And actually I didn't know that this was basically the only existing one that did that. So that is interesting. + +KG: There's remarkably little async stuff in the spec. It's like this and `Array.fromAsync` is the other place. I guess `Atomics.waitAsync` too. + +MAG: MM. + +MM: Yeah. So I like the semantics here, and what I'm about to propose does not change the observable semantics from what you're explaining at all. It's just a different way to phrase the spec, which may be better or may be worse depending on your criteria. But it avoids introducing a new state. The alternative to introducing a new state is to resolve the promise in question with a new spec-internal promise, i.e.
forward the promise in question to a new spec-internal promise, and then hold onto the resolver of the spec-internal promise. That effectively makes the original promise not re-resolvable. It sort of takes exclusive ownership of the ability to further resolve the original promise. So it's a cool way to do the deferral. + +MAG: Okay. I don't hate that idea. Like, I agree that that is a nice clarification. And it's probably more targeted, when we have to write the spec text, than having to wind a new state through everything. + +MM: Yeah. And with regard to, you know, things like our meta interpreter—our JavaScript implementation of JavaScript in JavaScript—and the ability of people to shim this, or things based on this: being able to state it in the spec in a way that's explicitly in terms of operations that are available to the JavaScript programmer, I think, is good. + +MAG: Okay. I had not thought about that, but that's actually a really nice, clever way to do that. So I will think about this more, and I like it. + +RPR: KG. + +KG: Oh, yeah. So since there's nothing else on the queue, I wanted to give explicit support for this going for stage two. I do also think it should be exposed to user land. I would use it in user land in all of the places I would use it in the spec. So I think it's just a very useful thing to have. And a thought on the naming is that you can technically avoid having to have a new name by just overloading the existing resolve. You could just pass it a second argument. Currently we're not using the second argument of promise resolvers. So you could say, if you pass true for the second argument, then it does this special deferred behavior. Possibly a new name is better, but the thing about a new name is—if you've already vended the Promise that you are going to be resolving, there's no place to put the new name.
The only things you have are the resolver functions. So `Promise.safeResolve` doesn't work, because that vends a new Promise rather than resolving an existing Promise. So a second argument to the resolver function is not a terrible way of doing this. + +MAG: Actually, I hadn't thought about that, but yes, that is actually quite a nice way to do it. Cool. OFR. + +OFR: Yeah, I guess a question. If I remember correctly, basically the deferred resolution will try to do everything in one go—there was a point regarding that on the slide, and maybe I missed it. So this is actually what will introduce an observable difference, if I understand correctly: what could happen is that the resolution itself creates another task, and then that task will run after the actual execution of the Promise. And so that would be one way of observing that you ended up in this deferred resolution. And I was kind of wondering, why is that important? Can we just drop that, and then you would not have this reordering problem? + +MAG: Well, so we hope it's not important that people care about the order that things actually get resolved in. That's maybe one of the fundamental premises here. It is always possible that people write code that depends on this. You know, they assume that some Promise is resolved and that the then handler fires before some other operation happens. You can write code where this is the case. I didn't quite follow how we could reorder this to avoid this, though, because the general thing is you just can't run user code when you are trying to resolve. And so that is all unobservable. The only thing that is actually observable here should be the ordering possibilities. The change of ordering.
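For illustration (not from the slides): resolving with a plain object that inherits a `then` runs that user code during resolution today—this is the kind of hook the proposal would defer. A minimal, runnable sketch:

```javascript
// Sketch: resolving with an object that inherits a `then` runs user code
// as part of the promise resolution machinery. The exact timing under the
// proposal is still being worked out; this only shows today's behavior.
const events = [];

Object.prototype.then = function (resolve) {
  // This inherited `then` is invoked by the resolution machinery.
  events.push('inherited then observed');
  delete Object.prototype.then; // remove the global patch after one use
  resolve(this.value);
};

Promise.resolve({ value: 42 }).then((v) => {
  events.push(`settled with ${v}`);
});
```

The `then` hook is observed first, and only afterwards does the promise settle with the unwrapped value.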
And, to be clear, there are multiple ways that you can observe this, because you would be able to see not just the fact that the Promise got resolved at a different time—like, all of the reaction records did not fire when you kind of thought they ought to have—but also, and the only way this happens is if you have in fact put a proxy on the prototype chain, or you have put, you know, a getter in the then slot, these are all places where you could and would observe different things. So the important thing here is that this does happen elsewhere. But I also think that a lot of people, when they are doing these kinds of things, are already trying to do bad things. And so maybe that's okay. Especially people who are putting then on object prototype. I don't think I've ever seen that in real code, but I see it in exploit code a lot. + +OFR: But can you maybe move one slide forward? \[Slide 10] Yeah, here, like this, do the rest of the PromiseResolve stuff without access and creating another job. This part, right? So maybe I misunderstood. Does that mean it will resolve the Promise and then also actually post the task to run the Promise code, or would the actual Promise code still run in a later tick? + +MAG: I need to work out exactly how this ends up playing out. I think, the way that it is currently written, if you were to just replace this with what I originally had—which was PromiseResolve, like just do the PromiseResolve operation—then you do in fact get into a situation where you have new jobs which would get enqueued, and they would run. And JRL’s point, as I took it, was: but you're already further along in the task queue, and so you could conceivably coalesce at least one of those jobs into just doing the thing now. But I haven't worked out all of the details here.
And yes, this is an area where there's a gap here for sure. + +OFR: Okay. So I guess this is what I meant. If the resolution itself creates or posts another task, then that task will be reordered after that coalescing, and that's how you could observe a reordering. And I was wondering, is it really necessary that we do it, right? + +MAG: Yeah. And I think the reordering itself is, yes, necessary. But I'm hoping it causes not too many problems, and SHS’s reply is probably relevant here. + +SHS: Yeah. So I mean, my experience with trying to move from kind of a mock clock sort of situation in synchronous tests to Promise-based asynchronous tests is that it tends to be very brittle. A lot of people write a lot of tests that depend on very specific tick counts. I don't know for sure that it would apply here exactly, 'cause I'm not entirely clear on what the proposal is. But my gut is that we would see a lot of difficulty in migrating. + +MAG: The hope is that, because this happens only when people are already doing bad things, that is the only change. Basically, you have to be already resolving a thenable. And so people have to actually explicitly go in and opt in to create a thenable. And so my hope is that the blast radius here is actually quite small. But it is absolutely a risk. And so I do take that point for sure. + +SHS: Those tests, they rely on tick counting. Is that considered appropriate, or would, for instance, web platform tests accept changes to remove them and refactor them? + +MAG: Yes. Generally I think there are certain cases where we do want to, you know, provide minimum counts about ticks and things. But I think for the web platform a lot of them would be fine. And there's not that many that actually get exercised here. Really, I think there are only like eight or nine of them that fail.
And they are very explicitly ones that are already trying to probe this behavior. And for at least two of them, it's like, hey, we had a security bug, so we wrote a WPT to try to make sure that we don't have that security bug again. And this proposal would sort of obviate the need for that test, because it tries to observe a behavior that will then subsequently never happen. + +SHS: Yeah. I think what I'm thinking about is maybe partly just an issue of us having a monorepo, and, you know, we upgrade the browser and then people break their tests and we're sort of responsible for that a little bit. But I think it's more of an us problem. And fortunately we have a little bit of leverage of, hey, you've gotta get your tests together or else you're going to be stuck on an old browser. So it's not a blocker, but it's something to be aware of. + +MAG: Okay. JRL? + +JRL: Hello. Okay. So, SHS’s point here: I think the Promise tests that you're concerned with are user-written tests, where they are trying to wait a couple of seconds or something like that for a Promise resolution to happen. They don't have direct access to that Promise, so they can't directly await it. They're just using setTimeout or something else as a side channel to wait for that event to happen. With my proposed change here, there would not be any additional ticks. So all of those tests that are using setTimeout should continue to function correctly. The only change is the order of whether or not you wait for one tick before you get the .then method. In the old code, you would get then, see it has a then method, then await; you would wait for that after a tick and then you would continue the Promise resolution after another tick. So two ticks total.
In the new one, since we cannot safely get the .then, we wait one tick, and then we get the value, resolve the Promise, and everything else about it. And then after another tick, the Promise resolution chain continues on. So it'll still be two ticks total. And hopefully, unless you are specifically testing for when the .then accessor is actually read, there will be no other breakages that could happen. + +RPR: MM? + +MM: Yeah. I support this. I would like to see this appear in userland. But I'm certainly not gating anything on that; that could happen in a later proposal. That's it. + +MAG: Okay. I'm happy to hear people expressing some support for the userland thing. So I will do work to actually write out what that would look like and think a little harder about what's involved. So that's great. Give people a minute. This went much quicker than I expected. I expected this to be more controversial. Thanks everyone. + +MM: While we're waiting, I have a historic note to interject, which is that this kind of repairs the breakage that happened with regard to the original intent when we first introduced Promises. We were trying to make it the case that a `Promise.resolve` of `X.then` stuff would be safe against reentrancy, independent of what X is. And then, over time, the breakage that happened was sort of inadvertent with regard to endangering the reentrancy property. So this finally realizes the original intent from when we introduced promises in ES6. + +MAG: Nice. I'm glad to be of service. All right. I think that's it for the queue. So I've asked for Stage 2. We've got two statements of support. Anybody want to object? + +RPR: So this is the official ask for stage 2 for Thenable Curtailment. I'm not hearing any objections. So I'll say congratulations, MAG. Oh—sorry, there's also been extra support from PFC. So congratulations, MAG. You have stage two. + +MAG: Awesome.
I will go update the summary in my notes. + +### Speaker's Summary of Key Points + +* Aiming to add a new ‘Safe Resolve’ (name TBD) operation which will defer execution by a tick, only in the cases where there’s a possibility that user code could run. +* MAG was curious about 1) whether we’re OK with adding an operation like this, and 2) whether or not there’s interest in exposing this to user code. +* MF asked about a predicate—whether or not we should add a ‘would-trigger-deferred-resolve’ predicate; we didn’t actually conclude on this. +* Discussion included some notes: + * Should we use this internal to the specification? MAG hadn’t thought about it, but it is worth thinking about, especially since this would have addressed the specification CVE from last year. + * MM brought up a clever way of avoiding the new promise state by creating a specification-internal promise, and just enqueuing a job with the resolver function and resolving that with the ‘dangerous’ value when the time comes. + * KG suggested a second argument to the resolve function for safe-resolve semantics. +* Some concerns about brittleness and testing. + +### Conclusion + +* Stage 2. +* Matthew will investigate user exposure, probably through a second argument of the resolve function. + +## Iterator Includes + +Presenter: Kevin Gibbons (KG) + +* [proposal](https://github.com/tc39/proposal-iterator-includes/pull/8/) +* (no slides) + +KG: So we just approved the spec text for iterator includes. But we did it wrong. We have a convention that—so iterators are closable, and they are supposed to be closed when the consumer is done with them, including if the consumer enters an error state that is not the fault of the iterator itself. And that includes, for example, passing an invalid argument to an iterator method. And we had not done that correctly here.
So if you passed a non-integer as the second argument to `iterator.includes`, it would throw without closing the receiver and leave the receiver in an unclosed state. So this is a PR fixing that. I think it should be uncontroversial, but I figured since we were here, we might as well stamp it. Does everyone approve this? All right. I hear no objections. Sounds like we're good. + +RPR: Let's just check—do we have any support for this change? + +JHD: It did get an approval on the PR. + +RPR: You've already got two approvals. And RGN has said the same. + +RPR: Yep. There have been no objections. So the PR is approved. Thank you, Kevin. + +### Speaker's Summary of Key Points + +* There was a bug in the just-approved `iterator.includes` spec text where it was not closing the receiver if the argument was invalid. The proposed PR fixes this. + +### Conclusion + +* PR is approved + +## Towards Structured Concurrency + +Presenter: Kevin Gibbons (KG) + +* [repo](https://github.com/bakkot/structured-concurrency-for-js) +* [slides](https://docs.google.com/presentation/d/1vYh__pmW0QuIhxz9jPQbUXdkIrzD5p7Wjl6oTdKSBZI/edit?usp=sharing) + +KG: Okay. Good afternoon, or whatever, everyone. This is not a concrete proposal. It is a sort of vision for a possible place that we could go as a language, and an introduction to a thing that a lot of people have been thinking about in other languages that we haven't actually talked about very much here. I do have a repository, but it's not for a proposal. It's just basically to use the issue tracker for discussions. + +KG: Okay, so, structured concurrency is a newish idea, or at least a relatively newly named idea. By which I mean it's about a decade old, but that's new in the timescale of programming languages. The name is an analogy to "structured programming" from the famous goto-considered-harmful paper. + +KG: The idea is that you want your concurrency to be basically bound to a scope.
So you have some scope in which it is possible to open child tasks, and then the scope doesn't exit until all of the tasks that were introduced within that scope have completed or shut down; if the shutting down is asynchronous, that requires waiting for the asynchronous shutdown Promises to complete as well. And usually, it is the case that errors in one task will cancel the other tasks. This is not necessarily implemented explicitly in the form of looking for errors in one task. Usually, what just happens is that any early exit from the scope will trigger cleanup of all the other tasks. So if you are awaiting one task and that task throws an exception which is not caught within that scope, then that exception will propagate and will exit the scope, and then scope exit will trigger cancellation on the other tasks. + +KG: So I'm not really going to defend the thesis that this is a good idea. It is implemented in a number of other languages, and I think they've all proven it out pretty well. But also, hopefully if you've done much concurrent programming at all, you can just stare at this and be like, "Oh, yeah, that would be nice." I'm happy to talk more about why this is nice if we want, but I'm not really intending this as a defense of this idea, just a presentation of it. And a vision for how we might get there in JavaScript. + +KG: Before going on to JavaScript, though, I'm going to give examples from three other languages of how this is implemented, starting with Python. So Python has this `with` construct, `async with`, which is their form of explicit resource management. So think like a `using` declaration, except that it introduces a new scope instead of just being ambiently in the current scope. They use it here for structured concurrency.
So you can introduce a task group and then create tasks within that scope, and then when you exit the `with` scope, that will trigger the cancellation on the tasks. Notably here, Python does have async await. Their async await is a lot more like generators in JavaScript, or you might think of it as cancelable Promises, in that the tasks created thereby have a `.cancel` method. So what this task group is doing on scope exit is calling `.cancel` on the tasks created by these async functions, and that injects an exception at the current await point within those functions. So you should think of Python's implementation as basically being explicit resource management plus cancelable Promises. + +KG: Next up, Java. This should look a lot like the Python one. Java has `try`-with-resources, which is their equivalent of Python's `with`, which is again just like our using statement, except that it's a new scope instead of the current scope. And in Java, they use virtual threads instead of an async await syntax, and so there are no explicit await points within these functions. But it otherwise works the same way. The threads get an InterruptedException injected into them, wherever they happen to be. Not literally wherever they happen to be—only at places where the VM has marked that it is potentially paused, but at positions which are not syntactically annotated. But otherwise, a lot like Python, you're invoking these functions within the scope, and when the scope exits, you trigger the cancellation method on those tasks, on those threads, which just like Python takes the form of throwing an exception.
And tasks can check `Task.isCancelled`, which is a Boolean which just tells you whether the current task has been canceled already. And at scope exit, any tasks which were introduced in that scope which have not already been awaited will have isCancelled set to true. So Swift doesn't actually automatically inject exceptions; it relies on purely cooperative checking for whether the current task has been canceled. And Swift also doesn't require you to introduce a special kind of scope in order to handle the possibility of canceling tasks within that scope; it just uses any scope where there is an async function invoked and bound to one of these `async let` declarations. If those `let` declarations have not been awaited by the time the scope exit is reached, then they are canceled—or rather, the isCancelled boolean is set to true. So Swift is the weirder of the three, mostly because they got to design their feature after structured concurrency was already in the water supply. + +KG: Okay, so can we do something like this in JavaScript? So this is sort of building off of James's presentation from earlier. So I am going to take as a given that we have to use AbortController, because AbortController is already what the web platform uses. It's already what most JavaScript runtimes that have things which can be canceled in any form are using; it's what user land libraries are using. I don't like it very much, but I'm just taking as a given that we gotta start with AbortController and see where we can get from there. And spoilers, we can get pretty far. + +KG: So this slide is what you would do today if you wanted something like structured concurrency. This isn't actually literally structured concurrency. You can hopefully read the comment towards the bottom of the slide here. But basically the idea is that you introduce an AbortController and then you manually thread a signal into every task that you initiate.
And then at the end of the scope, or rather not at the end of the scope but with a finally block, you call `controller.abort`. And that will set the signal's state to be aborted. So just like in Swift, this doesn't automatically cancel the tasks. But the next time any of these tasks check if the signal has been aborted, which there's a convenience method on the signal for doing so, they will trigger their own cleanup, presumably. But again, we don't get to wait for the cleanup; `controller.abort` is strictly one-way; it just sets the state of the signal and triggers any manually registered cleanup callbacks and does not then wait for any work to finish. + +KG: No one actually does this. In my experience and from reading a lot of code on GitHub, literally no one puts that finally block there. The `controller.abort` is only triggered when there is some other case that causes cancellation, not when you're just doing a scope exit. And this kind of makes sense: finally blocks are not a widely used feature, and this pattern is pretty ugly: you're introducing the controller outside of the scope. And also cancellation doesn't happen until after your catch block. So if you had something long running in the catch block, the other tasks would still be sitting there running. + +KG: Okay, so you can do a little bit better today by using our form of explicit resource management. AbortController is not currently disposable, but we have a mechanism in the form of DisposableStack for adopting non-disposable resources into the `using` syntax. And that's `DisposableStack.adopt`. And so this is basically equivalent to the code on the previous slide, except that now we will actually be calling `controller.abort` at the end of the try before the catch runs. So it's already a little bit of an improvement. On the other hand, no one does this either. Partly that's because the `using` syntax is quite new, but I think also it's just a lot of concepts to keep in mind. 
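A rough reconstruction of the manual pattern being described (illustrative names, not the slide itself): one controller per scope, the signal threaded into each task, and an abort in a `finally`:

```javascript
// Reconstruction of the manual pattern: thread one signal into every task,
// and abort in a `finally` so that leaving the scope cancels the others.
async function fetchPart(id, signal) {
  signal.throwIfAborted(); // tasks must cooperatively check the signal
  return `part-${id}`;
}

async function withScope() {
  const controller = new AbortController();
  try {
    return await Promise.all([
      fetchPart(1, controller.signal),
      fetchPart(2, controller.signal),
    ]);
  } finally {
    // One-way: flips the signal's state and fires abort listeners, but
    // does not wait for any asynchronous cleanup to finish.
    controller.abort();
  }
}
```

This is the version with the `finally` block that, in practice, almost no one writes.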
Not quite half of the code on the slide is boilerplate, but you do have a fair bit of boilerplate. And again, this is still not actually structured concurrency, 'cause we're not waiting for the cleanup to complete. + +KG: Before I keep going in this vein, that's where we are today. That's the best we can do given the features currently in the language, without introducing any new APIs or whatever. Do we have questions on the queue before I move on? + +DRR: You have no items in the queue. + +KG: Okay, great. I am assuming this is all clear and obviously correct then. Let's keep going. Okay, so how can we make the situation better? Well, the first thing we could do is make AbortController disposable, or introduce a subclass of AbortController which is disposable. There's some debate on the topic of which of those two we should do, but for purposes of illustration, I am imagining that we have an AbortController subclass. The only difference is that it has a `Symbol.dispose` method, which calls `controller.abort`. This would improve on the situation in the previous slides, because now we don't have to have a separate DisposableStack, which is pure overhead. And this already is kind of clean, I think. It's not ideal. There is still some manual juggling of the signal and everything. But at least you're not having to do a bunch of work to get automatic cancellation of outstanding tasks. I think this is independently motivated. It is still not structured concurrency because of the caveat about `controller.abort`. But I do think it's independently motivated. + +KG: However, what if we wanted to address that caveat? So let's go back and ignore the possibility of disposable AbortController and instead just try to fix the issue where we don't wait for cleanup. Maybe we can imagine having a version of `controller.abort` which just returns basically a `Promise.allSettled` for any Promises returned by the cleanup callbacks.
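A sketch of how such an awaitable abort could behave, simulated on top of today's `AbortController` (the class name and `addAbortCallback` are invented for illustration, not proposed API):

```javascript
// Sketch with invented names: an abort that can be awaited until cleanup
// promises returned by registered callbacks have settled.
class AwaitableAbortController extends AbortController {
  #cleanups = [];
  addAbortCallback(fn) {
    this.signal.addEventListener(
      'abort',
      () => { this.#cleanups.push(Promise.resolve().then(fn)); },
      { once: true },
    );
  }
  async abort(reason) {
    super.abort(reason); // fires the listeners above synchronously
    return Promise.allSettled(this.#cleanups); // then wait for cleanup
  }
}

// Demo: the awaited abort resolves only after cleanup has finished.
const events = [];
const controller = new AwaitableAbortController();
controller.addAbortCallback(async () => {
  await new Promise((r) => setTimeout(r, 10));
  events.push('cleanup finished');
});
controller.abort().then(() => events.push('abort awaited'));
```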
This, what's on screen is already technically structured concurrency, or almost. If we then combined it with the previous idea of making AbortController disposable, this is structured concurrency. We have introduced a scope. Tasks which are invoked in the correct way within that scope will be cleaned up at scope exit. Again, assuming that the tasks are cooperating with that properly. And you don't have to introduce a `finally` block or anything. And we will wait for cleanup to complete before proceeding past the end of the scope, before even entering the catch block. + +KG: So this gives us structured concurrency with no new features in the language, just new APIs. I think that's kind of great. So my main thesis is that we should do something like this. + +KG: On the other hand, how would you actually use this? So let's take an async function and imagine how we would actually implement it. So remember that signals let you register a callback to be invoked. Here I've written it in the form of addAbortCallback rather than what you actually have to do today, which is this horrible addEventListener machinery. addEventListener also doesn't have any possible affordance for returning a value from the cleanup handler. So I'm imagining that we add an addAbortCallback method to signals, which allows you to register a cleanup function and which allows you to return a Promise from that cleanup function which will be awaited by the `controller.abort` Promise that I was talking about on the previous slide. + +KG: So imagining we have all that, how do we actually wait for cleanup in the async function? Well, what you have to do is register a callback which is only going to resolve when the body of the function has completed. And you can do that by manually returning a Promise from the abort callback and then manually resolving it at the end of the function. And you have to remember to do that in a `finally` to handle any early exits or exceptions. But this works. 
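A reconstruction of that pattern (the `addAbortCallback` registration is imagined; here it is spelled with today's `addEventListener`, and the part that would await the returned promise does not exist yet):

```javascript
// Sketch: a task arranges for the scope's abort to wait on its completion
// by handing out a promise that it resolves in a `finally`.
async function task(signal) {
  let markDone;
  const completion = new Promise((resolve) => { markDone = resolve; }); // or Promise.withResolvers() in newer runtimes

  // Imagined: the returned `completion` would be awaited by an awaitable
  // `controller.abort`. With today's API, the closest spelling is a
  // listener whose return value is, unfortunately, ignored:
  signal.addEventListener('abort', () => completion, { once: true });

  try {
    // ...the actual body of the task goes here...
    return 'task result';
  } finally {
    markDone(); // guaranteed even on early exit or exception
  }
}
```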
This does technically work. But this is gross. + +KG: Also because we are passing a closure to `signal.addAbortCallback`, and we are never then removing this closure from the signal, this will keep that closure alive—and in actual engines, that closure will keep probably everything in scope alive—until the signal itself is cleaned up, even if this particular task completes much faster than the other tasks. + +DRR: We do have a question from Matthew Hoffman. + +MAH: Yes, right. I think you're actually addressing it. Could you go back one second on the other slide, that you just showed? This one. Yes. This one. I suppose the addAbortCallback is rejects. Right? No. I'm confused as to what the goal is here. I assume you're basically trying to have the AbortController being able to wait until all the reactions from the abortion have executed. + +KG: That's right. Yes. So that's the idea here is that the `controller.abort` would return a Promise and that Promise would only settle when all of the Promises returned from the cleanup handlers had themselves settled. + +MAH: Got it. So going back on the other slide that you just had right now, if anything throws, how do you ensure that—oh, because it's `finally { resolve() }`. Got it. All right. + +KG: Yes. Yes. But you are right to be like this, it is not obvious that this works or even that this is the code that you would want to write. And I'll get to that in a second. But first I want to address a separate issue with this code, which is that it's keeping the closure alive. When we did signal addAbortCallback and we added this closure, that closure will keep everything in scope alive as long as the signal itself is alive, even when the function itself has already completed normally. So if the function completes before other tasks in the scope have completed, we are still keeping everything in the function alive. + +KG: Right. So let's try to address that first problem and then we'll get back to the ergonomics. 
So the easiest way of doing that is to just have some way of removing the callback passed to addAbortCallback. In the prevailing pattern with things like addEventListener, you get a removeEventListener function that you have to pass the same function to. We could do that. I think it's pretty gross. I think it's a lot nicer if addAbortCallback just gives you a capability that lets you unregister the function that you've added. Supposing you have something like that, as long as you remember to unregister the callback, then you at least avoid the problem of the memory leak. On the other hand, this is even more gross than what we were writing previously. But at least it doesn't leak memory. + +KG: So now let's try to get to the ergonomics here. The simplest thing that we can do is to just make the token that I introduced on the previous slide itself be disposable. And then you don't have to manually unregister at the end of the scope. You just say, I'm going to do `using token = signal.addAbortCallback()`, and then at the end of the scope, the token is disposed, and that automatically unregisters, because we no longer care what's going on with the signal because we're done with the function. So okay, that's already an improvement. And I think this again is independently motivated. Leaving aside all of this stuff I said about async cleanup and structured concurrency and whatever, I think this is independently motivated. The issue where you attach a cleanup callback and then never remove the cleanup callback is actually extremely common when using AbortController today. And I think this would be a step towards avoiding that very common leak, even completely independently of trying to get structured concurrency. + +KG: So we're getting somewhere, I think. But you know, this code's still pretty gross. The `Promise.withResolvers`, it's basically a magic trick. From the point of view of the user, it's like, what are these first two lines doing, and what's this try/finally?
So let's introduce a helper so that you can abstract out all of the magic here. And this is just a structural rewrite of the code on the previous slide. Anywhere you have a try/finally, you can instead have a disposable resource. So let's introduce a helper that returns a resource that unregisters the token and calls the resolve method of the Promise that we were returning from the cleanup handler when the resource is disposed. So if you call this helper at the very first line of your function, you just say, `using whatever = mustComplete(signal)`, then you don't have to write a try/finally at all. And in fact, you don't have to write anything else in the rest of the body of your function. The function's cleanup Promise will be awaited in `controller.abort`. And if you get to the end of the function before `controller.abort` is called, then the cleanup handler will be removed and you don't leak any memory.

KG: It's even nicer if we could make this a built-in, because that helper was still sort of magic. So let's just say, signals have a prototype method that is just the thing that was on the previous slide, so `using whatever = signal.mustComplete()`.

KG: I still don't love this, but it's the first slide in the last five that I have not immediately hated. This is not the worst.

DRR: We do have a clarifying question from WH.

WH: Go back… a little more…. \["Maybe: with a helper?" slide\] I noticed that in the different solutions you flip the order of `token.unregister()` and `resolve()`. Does that matter?

KG: It should not matter. The resolve will—it just shouldn't matter under any circumstance. That was just an oversight on my part. Sorry.

KG: Okay. So where would this leave us? I think these are all independently motivated steps. I think it is useful to have a disposable AbortController. I think it is useful to have an AbortController that allows you to wait for asynchronous cleanup tasks. I think it would be useful to combine them.
I think it would be useful to have an addAbortCallback instead of having to use addEventListener. I think it would be useful for addAbortCallback to return a token that allows you to unregister the abort callback. And I think it would be useful for that token to be disposable. And assuming we had all of those things, I think it would be useful to have this mustComplete helper for the case when you are combining the async disposable AbortController and the unregister tokens. + +KG: This isn't the best possible version of structured concurrency. The most annoying part of it is, of course, that you have to thread the signal through everywhere. And that you have to manually do `signal.throwIfAborted` everywhere. And if you are doing a try catch around anything where there's a signal throwIfAborted, you have to put as the very first line of your catch basically a rethrow. You have to do `signal.throwIfAborted` again in the catch handler. I don't love any of these aspects. On the other hand, this gets us to something which is at least workable. + +KG: There are already user land libraries which are actually, I think, an improvement on this. For example, there's a library called Effection. It asks you to write generators instead of async functions. And because you're writing a generator, the generator can be driven from a library in user land and can inject a return completion at your await points, or rather your yield points because you're using a generator, not an async function. And that is in some ways nicer. That doesn't require you to thread signals around everywhere. It's actually quite close to Python's implementation. But it requires you to write generators everywhere. And it doesn't automatically work with AbortController, which is the thing that's already in—not the language, but the platform, and which I think is kind of inevitable that we put in the language. + +KG: Yeah. That's basically the talk. 
I did have one further idea, which is that, instead of this `signal.mustComplete`, you could in principle have a version of the Promise combinators that is attached to the AbortController. And so in this case, you would know that the Promise is from the task because you are passing them into the controller. This would allow you to skip the boilerplate in the callee, the `signal.mustComplete`, except of course you do still have to pass the signal in. On the other hand, if there's any other early exits from this function, then it's not going to wait for the cleanup for those functions. So I don't really like this idea. I just wanted to suggest it as a possible other thing that we could do. + +KG: Anyway, this isn't my ideal version of structured concurrency, but I think it's a feasible version of structured concurrency. I think we should take as a given that we are not getting the third-state cancellation at this point in time. Similarly we could say that actually it's possible to inject exceptions at await points that don't come from the value being awaited, but that rather come from outside of the function. But I don't think we're going to do that at this point either. Async functions already shipped, and they don't do that. + +KG: So yeah. That's my talk. We can get to the queue. + +DRR: Okay. First up on the queue, we have ACE. + +ACE: First of all, I love all of this. I love the recursive application of whenever you see try finally, that looks much nicer with our new, explicit resource management. With your comment on generators, just again, for people that might have heard that and gone: “well that sounds great. Why don't we do that?” One, frustrating thing with doing that in TypeScript is TypeScript doesn't have high kind of types. So when you await something, in an async function, TypeScript knows that the result of await is based on the right-hand side. 
If I await a Promise of a string, then await is going to give me back a string or throw; you don't get that with yield. Yield doesn't have a way to say: every time I yield, pass that type through a resolve-style higher-kinded type. So what the libraries that do this look like is you have to await *and* yield, so that you get the type resolution of await but the structured concurrency or async control flow of yield. It helps a little bit, but it's still not a great solution here.

KG: Yeah. And typing generators is quite hard because the `.next` method—the value of the yield is just whatever the argument to `.next` was. And that's only one function. So it's hard to have more than one type. But in real life, you are awaiting different values of different types, and you expect that to work.

DRR: We also have MAH.

MAH: Yeah. So I mean, one thing that might've been easy to miss in your example is that this requires you to write `return await` in the functions you call, whenever you have a `using`. This is something people often forget to do, especially because linters for a long time (and possibly still now) tell you the await isn't necessary in those cases. Whereas really, if you have a `using` in scope, it is.

MAH: So I just wanted to note that anything that relies on `using` and in-scope resource management will run into that gotcha: people will forget to write `return await`.

KG: Yes. That's a great point. On the other hand, you will at least do all of the cleanup in the function itself. The only thing that you're not waiting for is any work that's being done in the Promise that was returned. And if that function is itself written in this style where the function takes the signal, then you will still be waiting for work done inside of that function.
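MAH's gotcha can be made concrete. A `using` declaration desugars to try/finally, so this sketch writes the try/finally directly to stay runnable on current engines; `delay`, `withoutAwait`, and `withAwait` are illustrative names.

```javascript
// Demonstration of the `return await` gotcha with scope-based cleanup.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withoutAwait(log) {
  try {
    // Returning the bare promise exits the scope immediately...
    return delay(5).then(() => log.push('work settled'));
  } finally {
    // ...so this cleanup runs *before* the returned work settles.
    log.push('cleanup ran');
  }
}

async function withAwait(log) {
  try {
    // `return await` keeps the scope open until the work settles...
    return await delay(5).then(() => log.push('work settled'));
  } finally {
    // ...so cleanup correctly runs after it.
    log.push('cleanup ran');
  }
}
```

Awaiting `withoutAwait([])` fills the log in the order `['cleanup ran', 'work settled']`, while `withAwait([])` produces `['work settled', 'cleanup ran']`.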
So it's only either user land functions which are not cancelable or platform functions which don't do this signal mustComplete stuff. And that is still a problem, but the scope of the problem is smaller than you might think.

ACE: There's an eslint rule for this.

ACE: So I can make a guess, but I'd be curious to hear it stated explicitly: everything you've shown is technically outside of the current remit of TC39. This could all be done today; Ron has already gotten us to the place where we can do this. So first, what effectively is the relationship here, given that this all affects us? And two, if we do want, say, a canonical abort signal protocol in the language for other things, is the intention to add this to WHATWG? Because then we're going to end up with this eventually, so now is the time for us to decide whether we're happy with that, because this is the train that's heading towards our station.

KG: So a bit of that. Certainly if we can't do this in TC39, I will personally want to be pursuing it in WHATWG. But my opinion is that TC39 is the correct venue for this. And I think that we have a path towards doing it in TC39, which looks like basically—so James didn't talk about this in great detail, but I think we need to have cancellation in the language. I think cancellation in the language needs to basically be an extension of what's already in WHATWG, which is to say AbortController. So my vision for the future looks like, add AbortController to the core JavaScript spec. Not all of AbortController; in particular, AbortController is currently an event target in WHATWG and in Node and I don't want to pull in the event target machinery, but I would say that there is a class AbortController that has an abort signal and that the abort signal has an addAbortCallback method and the controller has an abort method.
And we could just say those things, and that would not require any changes in implementations that currently have an AbortController because they would already have that. And then once we had said that, we could start making these improvements in TC39, which I think is the correct venue, because this is really a language level facility.

ACE: Thanks, I'm pleased I asked the question 'cause I wasn't aware you were going to go there, and I'm pleased that's where you're going 'cause that sounds great.

DRR: KM?

KM: Carrying on what you said there, our perspective is that anything that we do related to cancellation has to fit nicely with AbortController. As far as web developers are concerned, this is all one and the same. The fact that these are in different standards bodies is not relevant. And we don't want to have a weird TC39 cancellation that doesn't fit nicely with the rest of the web platform's.

KG: Yes. I've been taking that as an axiom. Which is why this presentation all takes the form of, here are some extensions to AbortController. Even though I think that it leaves us in a place that is not necessarily as nice as Python or Java. I think it's the nicest place we can get taking that as an axiom. And I do also agree with it; web developers know that AbortController already exists.

DRR: ABO?

ABO: So, one thing that we had been thinking about for the AsyncContext proposal is to use async context to have an ambient abort signal, rather than having to pass it manually to each function. And it seems like it would fit very nicely with this. This is not something we have committed to; the plan would probably have been to pursue that after disposable async context. Or at least, in order for it to fit here, it would have to be pursued after the disposable async context work.
And, yeah, this is not part of the current web integration of async context, but it was a follow-up that we had been thinking of and planning to propose to WHATWG after the main web integration of async context was done. And it seemed like it could fit very nicely here. Maybe the async disposable would also set the ambient signal. But maybe you would also want to set an ambient signal with a different disposable. And it seems like it would all fit nicely together.

KG: Yeah, the idea of doing something with async context around this had occurred to me. But I haven't thought much about it. The idea being that there would just be an ambient, global-level one, basically what Swift does?

ABO: Yeah.

KG: And then whenever you started a new context, you would push that, basically.

ABO: Yeah.

KG: Yeah. That sounds like a great idea, especially if we can make built-ins respect it. That would be so nice.

DRR: We've got a reply from Stephen Hicks.

SHS: The ambient AbortSignal async context variables are certainly something that we should look into. And if it doesn't happen on the platform level, we'll probably experiment with it at Google anyway.

DRR: Great. Ashley?

ACE: Initially, I really liked the idea of the ambient context abort signal. I was randomly watching—I don't do any .NET, but I was watching a YouTube video about cancellation tokens in .NET. They went that route and then actually backtracked. They found the ambient abort signal actually led to some bugs where people thought it was being ambiently passed, but it turned out the thing was never consuming it. And they—at least the outcome from the video I watched was that they backtracked and they went back to preferring telling people to explicitly pass it down. So it'd be good to maybe make sure we don't repeat a mistake if there's something to learn there. Maybe there are reasons that won't apply to us.
Like it's very subjective, but I'll just say maybe this isn't as great an idea as it sounds or maybe it is.

DRR: A question from Mark?

MM: Yeah. You mentioned earlier that, had our history been different, we could have had three cancellation states—what would those three states have been? I didn't follow that at all. I don't necessarily mean three cancellation states per se.

KG: Consider the following:

```javascript
async function f() {
  let state;

  try {
    await promise();
    state = 'completed';
  } catch {
    state = 'errored';
  } finally {
    console.log(state);
  }
}
```

So in this code, it is an invariant that the `console.log` will print either completed or errored, or I suppose never run. This is true of async functions. It is not true of generators. If we had written this as a generator, and done `yield` instead of `await`, you can enter the `finally` without either proceeding past the `yield` directly or entering the `catch`, because generators have a `.return` method. So that's all I meant by third state. We could have done the same thing for async functions. We could have said that you can interrupt an await point without getting an exception from a Promise.

MM: Okay. I understand. Thanks.

KG: But like I said, yeah, I think that ship has sailed even though there are things to recommend it.

RBN: So I'm not sure if you've had time to look in Matrix, but I posted a rough idea. In a way, I kind of like the idea that you presented with how Swift does things, except back when we were first talking about cancellation a number of years ago, there was this contention going back and forth about whether or not cancellation should be something you explicitly pass around, or should it be something that's just ambient in the environment. And you could just cancel any Promise and then it would just cancel the operation and things would happen magically.
And that's where the problems lie because you can have code that requires asynchrony and must complete and does not want to be canceled that is in the middle of other code that can be cancelable. And that's where it's useful to be able to pass a signal around. When the first discussions around having an ambient signal came up, those were the problems that were presented: you had to have a way to be able to opt out and/or to opt in, and not everybody writes code that expects cancellation to work. So it's in a way better to force people to opt into it so that if you do support it, then you take it. If you do want to pass things down, you pass them down manually. So that's generally been the principle that they use in .NET and the one that I've primarily promoted. That said, I did take five minutes during your discussion and threw together an example that uses `AsyncContext` to pass along an `AbortSignal` that would be aborted if an exception is thrown in any of the pending operations, with the caveat that if you wanted to participate in this, you still had to look at the global `Scope`. It wasn't just magically happening in Promises. You had to look at a `Scope.isAborted` or a `Scope.signal.aborted`. So there's the potential that we could do something that introduces an ambient `AsyncContext` that if you want to opt into it you can, but you still have to talk to the signal. My only concern is something you stated a moment ago, which was that (paraphrasing) "it would be great if we did something like this with built-ins that just magically supported it". I would definitely not want to support that because I want a `fetch` to happen and don't want some random person who calls me to abort that operation because I'm doing a POST and I need to make sure that gets completed. It still needs to be explicit.
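A rough sketch of the opt-in ambient signal RBN describes. The real version would use `AsyncContext.Variable` from the AsyncContext proposal, which has not shipped; this stand-in emulates just enough of it for synchronous call chains, and `withScope` and `checkpoint` are hypothetical names.

```javascript
// Minimal synchronous emulation of AsyncContext.Variable, only for
// illustration; the real proposal also propagates across await points.
class ContextVariable {
  #value;
  run(value, fn) {
    const prev = this.#value;
    this.#value = value;
    try {
      return fn();
    } finally {
      this.#value = prev;
    }
  }
  get() {
    return this.#value;
  }
}

const ambientSignal = new ContextVariable();

// Establish a cancellation scope for everything `fn` calls.
function withScope(controller, fn) {
  return ambientSignal.run(controller.signal, fn);
}

// Opt-in: code that supports cancellation polls the ambient signal.
// Code that must run to completion simply never calls this.
function checkpoint() {
  const signal = ambientSignal.get();
  if (signal?.aborted) throw signal.reason;
}
```

The key design point, matching RBN's concern, is that nothing is canceled "magically": a callee participates only by explicitly calling `checkpoint()` (or reading `ambientSignal.get()`), so code that never opted in keeps running to completion.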
So you'd think that would work fine, but the problem that people have been running into is, if you need to do something atomically regardless of whether you've been canceled, you can just do it, but as soon as you call any library which responds to cancellation within that atomic code, you have a problem.

KG: Yeah. Yeah. That's an excellent point. So yeah. Okay. I now think we should not do anything that is automatically respected by built-ins. But also the whole async context thing would definitely be a much further follow-on.

DRR: CZW?

CZW: Yeah. I have a quick clarification question about what RBN said. Is the issue you mentioned, about not wanting the code to be aborted, more a question about declaring a safe point to cancel, or is it that you don't want the parent signal to be implicitly passed to an inner task that shouldn't be canceled?

RBN: Yeah. I would say it's a bit of both. The concern is that the author of a package may export an async function without expecting cancellation to be involved whatsoever. They're expecting their code to run to completion. They may not have considered safe points and how to back out in those situations other than exceptions being thrown. They may not have considered that built-ins they are executing might be canceled by some caller up the chain that they're not aware of. So it could be library code. It could even be their own code. It's just that they have not planned ahead for this, because when they wrote that code, this feature didn't exist, and then we suddenly introduce it and their code is running in the middle and now all of a sudden their code starts breaking because of something that their caller is doing that wasn't a feature before. That's a problem. So it's both of those things.

DRR: We have a little bit of space remaining. So we're going to extend it out another couple minutes. So next up on the queue is JSL.
JSL: Yeah, let me make one quick point on this: existing uses of AbortController and AbortSignal do not need to change, but that doesn't mean new cases have to continue to use those. So if we adopt this, I don't want to be constrained to that existing API, and I don't think we should feel constrained to it as long as it can be retroactively fitted into AbortSignal and AbortController. And then on the ambient signal, I have experimented with this. There are definitely cases where it works fantastically and lots of edge cases where it just falls down completely. If we approach it from that angle, we need to be quite careful of those edge cases. They are pretty sharp.

DRR: Okay. And SFC.

SFC: Yeah, thanks for the presentation. This was good. Having not thought as much about this problem space as you and many of the others in the room have: it seems to me like if you're inside a `Promise.all`, you have access to the Promises that were returned from all the functions that you were calling along the way. So if one of the Promises fails, then you can set state on the other Promises in the `Promise.all`. And then if we make the await syntax generate a check that checks that field off of the Promise, that would be a way to send the signal without having to pass it as an argument. Right?

KG: Yes. And we could have done that. I think that would be basically cancelable Promises. And I think we have gotten to a point where that's not going to happen because we have AbortController. Like KM said, it is the opinion of at least some browsers, and I think all of them, that any cancellation that we get has to be an extension of AbortController, which does not work that way. AbortController actually isn't tied to Promises at all.
You could do something like this where instead of `Promise.all` setting state, you have a `controller.all` that cancels its corresponding signal. And that would work. It doesn't do anything magic with await syntax. But it would cancel the signals. The downside of this, like I mentioned, is that any other early exits from the scope wouldn't be able to wait for the cleanup for those Promises.

SFC: It seems like what's nice for JavaScript might be to do something more like that, but I understand we want to be good citizens when we go into the web ecosystem. So yeah, it's a little bit of a tricky situation.

KG: Yeah. It's not my preferred world either. All right.

DRR: ACE, do you want to take the last response?

ACE: My gut feeling on `controller.all` is that it feels like a slippery slope, because that seems to suggest a settled variant of `controller.all`, a keyed `controller.all`, and so on. I'm not going to enjoy being the kind of trainer running classes trying to teach people the difference of when they should use `Promise.all` versus `controller.all`. That's my gut reaction here. I can see the convenience of it, but I also feel like it's going to be a really confusing thing for people to learn. So I want to think more about that area.

KG: Yeah. To be clear, this is only an alternative and I'm planning on not doing that.

KG: Okay. Thank you for coming to my presentation. It sounds like people are not opposed to this direction. So I will probably need—I'll try to put something together more concrete. But right now, the addAbortCallback is something that is proposed in the DOM, in WHATWG. Because that's where AbortController lives. And we have not done anything like this before where we sort of took something from another spec and upstreamed it. Not counting TypedArrays. So this will involve more collaboration across specs than usual.
I have found it kind of hard to get any response from people working on WHATWG specs, so I'm hoping that the browsers can help out with that some if this is a direction that browsers have any interest in. Also, separately, if we do this, it is definitely foreclosing any other sort of structured concurrency. Any further structured concurrency will be built on top of this. I think that's okay. I think the ideas around maybe using an async context are kind of neat and would be definitely a possible follow-on. But I want to make sure we know what we're getting into if we start going this direction. Okay. Thanks all. + +RPR: Thank you, Kevin. You have finished within the time. + +### Speaker's Summary of Key Points + +* Structured Concurrency is great +* AbortController is here to stay +* I think we have a path towards structured concurrency as a series of independently-motivated extensions to AbortController and AbortSignal, making heavy use of the new `using` syntax +* Safari, at least, is committed to only having AbortController-based cancellation, no new separate kinds of cancellation + +### Conclusion + +* General support from the committee for this direction.