This setup has been working great for me. The chat and web browser models are both using the Ollama cloud. This is about as good as it gets, I think, without paying the frontier model pricing.
Chat: qwen3.5:cloud
Web Browser Model: qwen3-vl:32b-cloud
Embedding: sentence-transformers/all-MiniLM-L6-v2
I keep getting this error after I upgraded to v1.5. Can you share how I might troubleshoot this, or is this a bug in 1.5? I am running on a Hostinger VPS with Dokploy and Ubuntu.
Hi guys, I'm just starting with A0 and I wanted to know if there's a way to connect it to signal-cli, even if it's not a native feature inside of Agent Zero.
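One non-native way to bridge this would be a small script that wraps signal-cli as a subprocess: poll `signal-cli receive` for inbound messages and forward them to your agent. This is just a sketch under assumptions — the phone number is a placeholder, the JSON envelope shape (`envelope` / `dataMessage` / `message`) is based on signal-cli's `-o json` output and may differ by version, and the "call into A0" part is left as an echo because A0's API surface depends on your setup.

```python
import json
import subprocess

ACCOUNT = "+15551234567"  # placeholder: your registered signal-cli number


def parse_envelope(line: str):
    """Pull (sender, text) out of one JSON line from `signal-cli -o json receive`.

    Returns None for events with no message body (receipts, typing
    notifications, etc.). The envelope shape is an assumption about
    signal-cli's JSON output and may vary by version.
    """
    data = json.loads(line)
    env = data.get("envelope", {})
    msg = (env.get("dataMessage") or {}).get("message")
    if not msg:
        return None
    return env.get("source"), msg


def send_reply(recipient: str, text: str) -> None:
    # `signal-cli -a ACCOUNT send -m TEXT RECIPIENT` sends a message
    subprocess.run(
        ["signal-cli", "-a", ACCOUNT, "send", "-m", text, recipient],
        check=True,
    )


if __name__ == "__main__":
    # Each `receive` invocation drains pending messages, then we loop.
    # Swap the echo reply below for a call into your Agent Zero instance
    # (e.g. a hypothetical HTTP endpoint, if you expose one).
    while True:
        out = subprocess.run(
            ["signal-cli", "-a", ACCOUNT, "-o", "json", "receive"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            parsed = parse_envelope(line)
            if parsed:
                sender, text = parsed
                send_reply(sender, f"A0 received: {text}")
```

Nothing here is A0-specific — the point is that signal-cli is scriptable, so any glue process (cron job, systemd service, Docker sidecar) can shuttle messages between Signal and the agent.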
Does anyone here happen to be a guru with LM Studio, especially when it comes to optimizing different models? I'm on Windows (unfortunately) with an NVIDIA RTX PRO 4500 Blackwell 32 GB GPU. I want to run the best models I can for A0 locally. It's working, but I just have a feeling I could optimize LM Studio more.
I have two different instances of A0, one on my desktop and one on a VPS. Both are running different models; the desktop is running a local model through Ollama. I see BOTH stalling out, and even with nudges they struggle to move along. Is that normal? I am assuming it is something I am doing.
@Daniel ITwizard Thanks! I started getting LM Studio installed again, but there was something I was doing that was being blocked by LM Studio. Of course, I can't recall what it is. I will look at trying a mixture-of-experts model like the Qwen3.5-35B-A3B.