Meta's new Muse Spark may be pitched as a smarter AI model, but based on early testing, it sounds like the kind of AI you really do not want anywhere near serious medical decisions.
A recent WIRED report described hands-on experience with Muse Spark, Meta's health-focused AI model inside the Meta AI app, and the results were not promising. The chatbot reportedly encouraged users to upload raw medical information like lab reports, glucose monitor readings, and blood pressure logs, then offered to help analyze patterns and trends.
All of this sounds pretty useful until you run into two immediate concerns: you're handing over very sensitive data, and it's unclear whether the AI is even remotely trustworthy enough to interpret it.
What went wrong in the early tests?
The first problem is hard to ignore. In a day and age where your life already feels too transparent, Muse Spark is prying even further. Giving out the information needed for an accurate diagnosis is expected when you see a doctor, but handing your personal health records to a chatbot for advice sounds like a privacy risk.
Unlike data shared with a doctor or hospital, information entered into a chatbot does not automatically come with the same expectations or protections people may assume are in place. This isn’t a professionally vetted opinion, and that’s what makes the idea shaky. The AI is being presented as a helpful tool, but the environment around it still looks much closer to a consumer product than a proper medical one.

This isn’t even the worst part
Setting aside the typical privacy risks of sharing personal data with any tech giant, you'd at least expect a serviceable answer in return. But the more serious problem appeared to be the quality of the advice itself. In WIRED's testing, the chatbot reportedly generated an extremely low-calorie meal plan after being asked about weight loss and aggressive intermittent fasting.
While the bot did flag some of the risks along the way, a warning does not mean much if the model then goes on to help the user do the dangerous thing anyway. This is where the real issue lies with a lot of AI health tools right now. They can sound cautious, informed, and balanced right up until the moment they start reinforcing bad assumptions. That polished tone means the wrong advice gets delivered with confidence, which makes the failure more dangerous.
