diff --git a/apps/demos/Demos/Chat/AIAndChatbotIntegration/description.md b/apps/demos/Demos/Chat/AIAndChatbotIntegration/description.md
index b53fb1b2a7b4..f736a2bd0eac 100644
--- a/apps/demos/Demos/Chat/AIAndChatbotIntegration/description.md
+++ b/apps/demos/Demos/Chat/AIAndChatbotIntegration/description.md
@@ -15,4 +15,8 @@ The AI model outputs responses in Markdown, while the Chat requires HTML output.
 
 ## Default Caption Customization
 
-The Chat component in this demo displays modified captions when the conversation is empty. The demo uses [localization](/Documentation/Guide/Common/Localization/) techniques to alter built-in text.
\ No newline at end of file
+The Chat component in this demo displays modified captions when the conversation is empty. The demo uses [localization](/Documentation/Guide/Common/Localization/) techniques to alter built-in text.
+
+## Speech Recognition
+
+Users can enter messages by voice when [speechToTextEnabled](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#speechToTextEnabled) is set to `true`. Our Chat component uses the [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API) for speech recognition. Use [speechToTextOptions](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#speechToTextOptions) to define custom speech recognizer settings, handle related events, and customize the appearance of the speech-to-text button.
\ No newline at end of file
diff --git a/apps/demos/Demos/Chat/Overview/description.md b/apps/demos/Demos/Chat/Overview/description.md
index 5991dc8a0b31..194bae4fbf2e 100644
--- a/apps/demos/Demos/Chat/Overview/description.md
+++ b/apps/demos/Demos/Chat/Overview/description.md
@@ -24,4 +24,8 @@ Each message includes information about the sender ([author](/Documentation/ApiR
 
 If a user enters a message, the Chat component raises the [messageEntered](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#onMessageEntered) event.
 Use the event handler to process the message. For example, you can display the message in the message feed and send the message to the server for storage.
-When users start or complete text entry, our Chat component raises [typingStart](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#onTypingStart) and [typingEnd](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#onTypingEnd) events. Use these events to manage the [typingUsers](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#typingUsers) array. The DevExtreme Chat uses this array to display a list of active users.
\ No newline at end of file
+When users start or complete text entry, our Chat component raises [typingStart](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#onTypingStart) and [typingEnd](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#onTypingEnd) events. Use these events to manage the [typingUsers](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#typingUsers) array. The DevExtreme Chat uses this array to display a list of active users.
+
+## Speech Recognition
+
+Users can enter messages by voice when [speechToTextEnabled](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#speechToTextEnabled) is set to `true`. Our Chat component uses the [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API) for speech recognition. Use [speechToTextOptions](/Documentation/ApiReference/UI_Components/dxChat/Configuration/#speechToTextOptions) to define custom speech recognizer settings, handle related events, and customize the appearance of the speech-to-text button.
\ No newline at end of file
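The speech-to-text configuration both hunks describe can be sketched as a jQuery-based Chat setup. This is a minimal illustration, not part of the patch: the `speechToTextEnabled` and `speechToTextOptions` option names come from the API links above, while the `user` value and everything inside `speechToTextOptions` are placeholders whose documented shape lives in the linked API reference.

```javascript
// Hypothetical sketch -- assumes DevExtreme and jQuery are loaded.
// Only `speechToTextEnabled` and `speechToTextOptions` are taken from
// the descriptions above; the rest is illustrative.
$(() => {
  $("#chat").dxChat({
    user: { id: "user-1", name: "Demo User" }, // placeholder author
    // Shows the built-in speech-to-text button; recognition is
    // performed through the browser's Web Speech API.
    speechToTextEnabled: true,
    speechToTextOptions: {
      // Recognizer settings, related event handlers, and button
      // appearance customization go here -- see the
      // speechToTextOptions API reference for the exact members.
    },
  });
});
```

Note that Web Speech API support varies by browser, so demos typically pair this option with a feature check before relying on voice input.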