Hello August!
Today, we landed the biggest update ever for ChatUML!
Markdown support in chat
From now on, all chat messages are formatted using Markdown. This makes it easier to read and follow what the AI assistant says, and it also lets you write code in the chat easily.
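Conceptually, this just means converting each message's Markdown source to HTML before it is shown in the chat bubble. The sketch below is a minimal illustration assuming the popular marked library and a simple message shape; it is not ChatUML's actual implementation.

```ts
// Minimal sketch: render a chat message's Markdown as HTML in the browser.
// The "marked" library and the ChatMessage shape are assumptions for illustration.
import { marked } from "marked";

interface ChatMessage {
  role: "user" | "assistant";
  content: string; // raw Markdown from the user or the model
}

function renderMessage(message: ChatMessage): string {
  // Convert the Markdown (headings, lists, inline code, ...) to HTML.
  return marked.parse(message.content) as string;
}

const reply: ChatMessage = {
  role: "assistant",
  content: "Here is your diagram:\n\n1. Open the **Editor**\n2. Paste the `@startuml` block",
};

// Append the rendered HTML to the chat container (id is hypothetical).
document.querySelector("#chat")!.innerHTML += renderMessage(reply);
```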
Streaming response
We should have done this from the beginning, but we didn't. That was a mistake: users had to wait for the full message to be received before seeing anything in the UI.
From now on, the AI's response starts appearing in the chat immediately, as it is generated. This makes working with the AI faster and more natural.
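In practice, streaming usually means reading the response body chunk by chunk instead of awaiting the whole payload. Here is a rough sketch using fetch and ReadableStream; the /api/chat endpoint and the plain-text chunk format are assumptions for illustration, not ChatUML's real API.

```ts
// Minimal sketch: stream an AI reply into the UI as chunks arrive.
// The endpoint, request body, and element id are hypothetical.
async function streamReply(prompt: string, onChunk: (text: string) => void): Promise<void> {
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });

  const reader = response.body!.getReader();
  const decoder = new TextDecoder();

  // Read chunks as they arrive instead of waiting for the whole body.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}

// Append each chunk to the last message bubble as soon as it is received.
streamReply("Draw a sequence diagram for login", (text) => {
  document.querySelector("#last-message")!.textContent += text;
});
```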
Better conversation context
Previously, you may have noticed that only the context of the last message was preserved in the chat, so if you asked the AI about something at the start of the conversation, a few messages later the assistant would have completely forgotten about it.
Well, not anymore. The full chat history is now preserved throughout a conversation.
This feature does have a downside, though: you will reach the token limit faster. In that case, you can switch to another model, such as GPT-3.5 16k.
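Conceptually, preserving context means resending the accumulated message history with every request, which is also why token usage grows over a long conversation. A minimal sketch, with a hypothetical sendChat() helper and illustrative model name:

```ts
// Minimal sketch: keep the full conversation history and send it with each request.
// The message shape, endpoint, and model name are assumptions for illustration.
type Role = "system" | "user" | "assistant";

interface Message {
  role: Role;
  content: string;
}

const history: Message[] = [
  { role: "system", content: "You are a helpful diagramming assistant." },
];

async function ask(question: string, model = "gpt-3.5-turbo-16k"): Promise<string> {
  history.push({ role: "user", content: question });

  // The whole history travels with every request, so earlier messages are not forgotten.
  // Trade-off: more messages means more tokens per request.
  const answer = await sendChat(model, history);

  history.push({ role: "assistant", content: answer });
  return answer;
}

// Hypothetical transport function; a real backend call will differ.
async function sendChat(model: string, messages: Message[]): Promise<string> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages }),
  });
  return res.text();
}
```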