
· 2 min read
Huy Tran

Big news! ChatUML just got a major update, bringing exciting new features and a fresh look to boost your productivity.

🦜 Floating Chat Box, More Space

We’ve redesigned the interface with a floating chat box that you can move around freely, giving you more space to work on your diagrams without interruptions.

Floating Chat Box

🖼️ Prefer Sketching? Just Paste an Image!

Sometimes the best ideas come from brainstorming sessions around a whiteboard. Don’t erase your sketches when you’re done. Snap a photo and paste it directly into ChatUML! The AI can turn your whiteboard notes into C4 diagrams or any other type of diagram you need.
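To give a rough idea of what that output can look like, here is a minimal, hypothetical C4 container diagram in PlantUML, the kind of result the AI could produce from a whiteboard photo of a simple web app. The system, container names, and technologies below are purely illustrative and not taken from any real sketch.

```plantuml
@startuml
' Hypothetical example: a C4 container diagram for a simple web app.
' Everything here is illustrative, not generated from an actual photo.
!include <C4/C4_Container>

Person(user, "User", "Sketches ideas on a whiteboard")

System_Boundary(app, "Example Web App") {
  Container(web, "Web Frontend", "React", "Lets users create and edit content")
  Container(api, "API Server", "Node.js", "Handles requests from the frontend")
  ContainerDb(db, "Database", "PostgreSQL", "Stores user data")
}

Rel(user, web, "Uses", "HTTPS")
Rel(web, api, "Calls", "JSON/HTTPS")
Rel(api, db, "Reads from and writes to", "SQL")
@enduml
```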

You can paste images into the chat!

You can now paste up to three images into the chat box and let AI work its magic!

Please keep in mind that all the attached images will only be used as a reference for the chat message you send. We will not store them on our server or display them in the conversation.

Have content stored elsewhere but don’t want to paste it all into the chat? Now, you can simply share a link! ChatUML will pull in the content from the link so you can easily reference it in your conversation.

Currently, we support text-based content, and PDF support is in the works. We’ll have it ready for you soon!

· One min read
Huy Tran

We are excited to announce that starting today, all users, including FREE users, will have access to GPT-4o Mini, the new and powerful model from OpenAI that outperforms GPT-4.

What This Means for You:

  • Improved Performance: Enjoy faster and more accurate diagram creation and editing, thanks to the advanced capabilities of GPT-4o Mini.
  • Enhanced User Experience: With the superior understanding and processing power of GPT-4o Mini, ChatUML becomes even more intuitive and user-friendly.
  • Increased Accessibility: Now, every user, whether on a free or paid plan, can leverage the full potential of this cutting-edge AI technology.

Upgrade your diagramming experience with ChatUML and GPT-4o Mini today. Dive into the future of intelligent diagram editing and see the difference for yourself!

· One min read
Huy Tran

We have a quick and exciting announcement!

Starting today, we are replacing the GPT-4-Turbo model with the new GPT-4o model. This is OpenAI’s most advanced model, with a whopping 128k-token context window and stronger reasoning ability.

The new model is now available for all users.

Also, for users who purchased the Pro Package, we are increasing your message limit from 50 to 500 messages per diagram!

· 2 min read
Huy Tran

It’s been a while since the last update. Over the past few months, we’ve been working hard to improve the product and the experience for our users, constantly shipping new updates, so it’s about time for a proper update post.

🚀 New models and faster responses

We’ve rewritten the entire chat streaming backend to make it faster and more stable.

Our model selection has been updated with new models, including GPT-3.5 Turbo 16k and GPT-4. Here's the full list of models we're supporting:

  • GPT-3.5 Turbo 16k: The new default model for all users. It supports 16k tokens, replacing the old 4k-token model, which means bigger and more complex diagrams.
  • GPT-4: The most capable model, very good at logical reasoning and creativity. This model has a context window of 8k tokens.
  • GPT-4 Turbo 128k: GPT-4 with a context window of 128k tokens. This model is still in preview, so it may be unstable and will be rate limited.

📖 New documentation site

We've also released a new documentation site at docs.chatuml.com. This site will be the home for all the tutorials and guides for ChatUML.

🀫 One more thing...​

A few months back, we celebrated our 5,000th user. As of today, we have just surpassed 150,000 users 🎉. We’re so happy to see the community growing, and we’re grateful for all the support from our users.

As a thank you, we have a little gift for all users. Use the code FRIENDS150 to get 30% off when purchasing any package. This code is valid until 11:59 PM Feb 29, 2024 (PST), and can only be applied once per user.

· 2 min read
Huy Tran

First of all, I want to thank everyone for your love and support of our product.

When we opened up GPT-4 access to all users, everyone jumped in and tried it out. Usage over the past week has skyrocketed. There are no words to describe our joy and appreciation for all of our users.

Of course, the operating bill also skyrocketed. As a small team, this is a huge cost for us to handle, so we had to make the hard decision to limit access to the GPT-4 model.

The GPT-4 model will only be accessible to users with 100 credits and up.

If you’ve selected GPT-4 but your current credit balance is lower than 100, it will automatically fall back to GPT-3.5 Turbo 16k.

This change will not affect users who already purchased the Unlimited package. You can still access GPT-4 models as usual.

While we understand that any price adjustment might cause concern, we assure you that this decision was well-considered and necessary for us to maintain the exceptional level of service you have come to expect from us. The new pricing structure will enable us to continuously improve our offerings, enhance customer support, and ensure your experience remains exceptional.

Please don't hesitate to reach out if you have any questions or concerns.

Best regards,

ChatUML Team

· One min read
Huy Tran

Hello August! 🍁

Today, we landed the biggest update ever for ChatUML!

📝 Markdown support in chat

From now on, all chat messages are formatted using Markdown. This makes it easier to read and follow what the AI assistant says, and you can also write code in the chat easily.

🔥 Streaming response

We should have done this from the beginning, but we didn’t! That was a mistake: users had to wait for the full message to be received before seeing anything in the UI.

Chat streaming!

From now on, you will see the AI’s response appear in the chat immediately. This makes working with the AI faster and more natural.

🦜 Better conversation context

Previously, you may have noticed that only the context of the last message was preserved in the chat, so if you asked the AI about something at the start of a conversation, a few messages later the assistant would have completely forgotten it.

Well, that's no more. Now the full chat history will be preserved during a conversation.

This feature has a downside, though: you will reach the token limit faster! In that case, you can switch to a model with a larger context window, like GPT-3.5 Turbo 16k.

Β· One min read
Huy Tran

We're happy to announce that ChatUML now supports GPT-4, the most powerful and capable model provided by OpenAI.

You can use a custom model with ChatUML

GPT-4 support comes with better logical reasoning, which means you can generate higher-quality and more sophisticated diagrams.

You can also select between the different models on the Settings page. Currently, we support 3 models:

  • GPT-3.5 Turbo: The original one. Costs 1 star per request.
  • GPT-3.5 Turbo 16k: Same as above but supports 16k tokens, which means bigger and more complex diagrams. Costs 2 stars per request.
  • GPT-4: The most capable model, very good at logical reasoning and creativity. Costs 4 stars per request.

· One min read
Huy Tran

Today, we're rolling out password-based login for all users.

If you still have an active login session, you can create a password on the Settings page.

Set a new password

If you previously signed up for an account at ChatUML with the magic link, you can still log in by ticking the "I don't have a password" box on the login page.

Login with magic link

Since the magic link login feature will be deprecated soon, we recommend setting your password as soon as possible to avoid losing access to your account.

For newly signed-up users, password login will be the default.

· 2 min read
Huy Tran

🎉 Celebrating our 5,000th user!

ChatUML started at the beginning of April 2023, and today we have reached 5,000 users! Since the Product Hunt launch on May 25th, we have seen a lot of users sign up every day. While I enjoyed the user growth, I’m sure OpenAI enjoyed it too, because my bill keeps going up! 🤣

Anyway, thank you so much, our beloved users, for all of your support! If you love the product, consider following us on Product Hunt!

⚑ The AI generator just got faster!​

Since launch, there had been a mistake in the code that made API responses painfully slow: on average, an AI request took ~20 seconds to complete. Over the weekend, I found and fixed it, so you can now enjoy faster API responses. I will continue to monitor this closely, and more improvements are coming in the next few weeks.

🕶️ Dark mode add-on users

ChatUML does not have native dark mode support (yet), but if you use a dark mode add-on like DarkReader, you may have noticed a glaring white dotted grid in the background. This happens because dark mode add-ons cannot handle the colors in the SVG background properly.

The issue has been fixed too. Now you can use ChatUML in the dark without hurting your eyes!

Better dark mode

Diagrams may still have some color issues in this mode. Don’t worry, we’re working on better dark mode support in the coming releases.

💬 Multi-line messages

One of the things that frustrated our users most was the inability to type multi-line messages in the chat. Well, that’s no more: you can now press Shift + Enter to insert a new line in the chat.

This makes it easier to type code into chat messages. Oh, by the way, have you ever tried asking ChatUML to explain some code?


That’s it for this week. I hope you like the product, and please feel free to reach out with your feedback on ChatUML (or just say hello!) at hello@chatuml.com.