As you read this, a multitude of digital therapists powered by artificial intelligence may be blundering more than they are healing. A recent Stanford University study raises serious questions about the efficacy and safety of AI-powered therapy chatbots. The researchers warn that such bots may not only respond inappropriately or dangerously to patients but also stigmatize users with particular mental health conditions.
Unsettling findings emerged when the study scrutinized five chatbots designed to offer accessible therapy. The bots were assessed against criteria that define an excellent human therapist. The worrying conclusion: rather than helping, AI chatbots may play an inadvertent role in reinforcing delusional or conspiratorial thought patterns.
Sakana AI, a Tokyo-based research lab, has pulled back the curtain on an innovative technique built on the cooperative effort of multiple large language models (LLMs). The method produces an AI 'dream team' that tackles a shared task more effectively than any individual model, surpassing single-model performance by about 30%. The unveiling of the method, named Multi-LLM AB-MCTS, has sent ripples across the world of AI and enterprise.
This trailblazing technique presents an intriguing prospect for building more robust and adaptable AI systems. It essentially lets businesses shake off dependency on a single model or provider. In a groundbreaking step, it dynamically draws on the strengths of different frontier models according to the demands of the task at hand, thereby delivering superior results.
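To make the routing idea concrete, here is a minimal, hypothetical sketch. It is not Sakana AI's actual AB-MCTS algorithm (which combines adaptive branching with Monte Carlo tree search); instead it illustrates the underlying intuition with a simpler bandit-style allocator: spend each attempt on whichever model currently looks strongest, learning from successes and failures as you go. The `make_model` helper and its success rates are invented stand-ins for real frontier models.

```python
import random

# Hypothetical stand-ins for frontier models: each "model" is a callable
# that solves a task with some success probability unknown to the allocator.
def make_model(success_rate):
    def model(rng):
        return rng.random() < success_rate
    return model

def allocate_attempts(models, budget, seed=0):
    """Thompson-sampling allocator: a simplified illustration of dynamically
    routing work to whichever model is performing best on the current task."""
    rng = random.Random(seed)
    wins = [1] * len(models)    # Beta(1, 1) priors for each model
    losses = [1] * len(models)
    for _ in range(budget):
        # Sample a plausible success rate per model; try the most promising one.
        samples = [rng.betavariate(wins[i], losses[i]) for i in range(len(models))]
        i = max(range(len(models)), key=samples.__getitem__)
        if models[i](rng):
            wins[i] += 1
        else:
            losses[i] += 1
    return wins, losses

# Three pretend models with different (hidden) strengths on this task.
models = [make_model(0.1), make_model(0.6), make_model(0.3)]
wins, losses = allocate_attempts(models, budget=200)
best = max(range(len(models)), key=lambda i: wins[i] / (wins[i] + losses[i]))
print("model judged strongest:", best)
```

Over the budget, the allocator concentrates attempts on the model that actually succeeds most often, which is the same economic appeal the article describes: no up-front commitment to a single provider, with the workload shifting toward whichever model earns it.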