Model Additions


Integration of Aleph Alpha Models
Hello Dify Community, I'd like to propose the integration of a new model provider, Aleph Alpha, to enhance Dify's capabilities with German large language models.

Is this request related to a challenge you are facing?
Yes, I am developing an application for a customer who requires the integration of a German AI solution. Currently, Dify does not support any German large-language-model providers, which is a significant limitation.

What is the feature you'd like to see?
I suggest adding support for Aleph Alpha's models: Luminous-base, Luminous-extended, Luminous-supreme, and the Luminous-control variants. These models cover a range of capabilities, from fast and cost-effective completion to more complex tasks such as creative text writing and zero-shot performance. Aleph Alpha provides an OpenAI-style API specification and a Python client, which should facilitate the integration.

How will this feature improve your workflow / experience?
The integration of Aleph Alpha models would allow me to deploy Dify apps for customers who prefer German AI solutions. It would also benefit the broader Dify community by providing access to a versatile family of models.

Additional Context or Comments
Aleph Alpha publishes its API specification and a Python client; both should aid in the technical integration with Dify.

Can you help with this feature?
I am willing to assist by learning how the Aleph Alpha API works, reporting bugs, and providing feedback on both the cloud and self-hosted versions of Dify. I am available on the Discord server and open to a voice chat to discuss Aleph Alpha in more detail. I believe this feature will significantly expand Dify's language-model offerings, and I look forward to contributing to its development.
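To make the integration concrete, here is a minimal sketch of how a provider adapter might assemble a completion request for Aleph Alpha's HTTP API. The base URL and field names (`model`, `prompt`, `maximum_tokens`) follow my reading of Aleph Alpha's public docs and should be verified against the official API specification before use; the helper only builds the request pieces so any HTTP client can send them.

```python
import json

# Assumed base URL; confirm against Aleph Alpha's API documentation.
API_BASE = "https://api.aleph-alpha.com"

def build_completion_request(api_key: str, model: str, prompt: str,
                             maximum_tokens: int = 64):
    """Build the HTTP pieces for a Luminous completion call.

    Returns (url, headers, json_body) so the caller can send the request
    with any HTTP client (requests, httpx, urllib, ...).
    """
    url = f"{API_BASE}/complete"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,                    # e.g. "luminous-base"
        "prompt": prompt,
        "maximum_tokens": maximum_tokens,  # cap on generated tokens
    }).encode("utf-8")
    return url, headers, body
```

Keeping request construction separate from transport like this mirrors how Dify's existing model-provider adapters can be unit-tested without network access.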
under review
Voice Generation Integration with Elevenlabs
Hello Dify Team, I'm excited to propose a new feature for Dify's chat functionality: voice generation integration with Elevenlabs. This feature would enable Dify to generate voice responses, enhancing user experience and communication efficiency.

Key Points:

Integration of Elevenlabs:
- Add a new model type for voice models and implement Elevenlabs as the first provider of this type.
- Allow users to define the API key and select a voice model (e.g. "Eleven Multilingual V2").

Voice Generation Feature in the Dify App:
- Introduce a "Voice Generation" section in the Dify app configuration, requiring a voice ID and settings for stability, similarity boost, style, and speaker boost.

Voice Chat Mode:
- Activated via an icon in Dify Chat, transitioning users into a voice conversation environment.
- Responses are streamed sentence by sentence, creating a seamless and natural dialogue flow.
- Metadata from the Elevenlabs streaming response can be used to sync the generated text with voice playback, so the LLM response is displayed while the voice response plays.

Improved User Experience:
- Generate voice from LLM responses by streaming audio while the LLM is still generating, enabling faster responses.
- Make the minimum number of words/sentences required before triggering a voice request configurable.
- Store generated audio in a buffer, allowing the Dify chat client to stream the voice response from the buffer in sync with the displayed LLM response.

Future Enhancements:
- Real-time voice chat, leveraging the existing voice input feature to enable voice-to-voice conversations.
- AI-powered speech recognition: automatically detect when a user starts and finishes speaking via a voice activity detector (VAD) and resume listening after a response. This would allow a permanently listening Dify app to run in the background, so users can chat with an LLM without keeping the chat window focused.

This feature will significantly improve the communication experience within Dify, giving users the ability to engage in voice-based interactions and enhancing overall satisfaction. I look forward to your consideration and feedback on this feature suggestion!
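The sentence-by-sentence streaming with a minimum-length threshold described above can be sketched as a small generator. This is an illustrative sketch, not Dify or Elevenlabs code: `sentence_stream` and its `min_words` parameter are hypothetical names, and the naive regex sentence splitter stands in for whatever segmentation the real implementation would use. Sentences too short to justify their own TTS request are merged with the following text.

```python
import re

# Split on whitespace that follows a sentence terminator.
SENTENCE_END = re.compile(r'(?<=[.!?])\s+')

def sentence_stream(chunks, min_words=3):
    """Accumulate streamed LLM text chunks and yield complete sentences.

    A sentence is emitted once a terminator (. ! ?) has been seen AND the
    pending text contains at least `min_words` words; shorter fragments
    are held back and merged with the next sentence, so tiny snippets
    don't each trigger a separate TTS request.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        parts = SENTENCE_END.split(buffer)
        buffer = parts.pop()  # last part may be an incomplete sentence
        pending = ""
        for sentence in parts:
            pending = f"{pending} {sentence}".strip()
            if len(pending.split()) >= min_words:
                yield pending  # long enough: hand off to TTS
                pending = ""
        if pending:  # too short on its own; put it back before the buffer
            buffer = f"{pending} {buffer}"
    if buffer.strip():
        yield buffer.strip()  # flush whatever remains at end of stream
```

For example, the chunked stream `["Hello there", ". How are", " you today? I", " am fine."]` yields two utterances, `"Hello there. How are you today?"` and `"I am fine."`, because the first sentence alone falls below the three-word threshold and is merged with the question that follows it.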
LM Studio as Model Provider for Dify
Hello Dify Community, I'm excited to propose a new feature for Dify: the integration of LM Studio as a model provider. This would allow users to serve LLMs through LM Studio's API from within a self-hosted Dify environment.

Key Features:
- LM Studio API Integration: Users can host LM Studio locally and register it as a model provider, making the LLMs it serves available to Dify apps.

How Users Interact:
- Users connect Dify to a locally running LM Studio server and then use its models like those of any other provider within the Dify environment.

Benefits for Dify Users:
- Self-Hosting Flexibility: Users who self-host Dify can serve their own LLMs through LM Studio, including fine-tuned models based on small open foundation models.

Integration Use Cases:
- Effortless Model Hosting: LM Studio is quick to set up, making it easy to serve custom models for use within Dify.
- Private Dify Environment: Self-hosted Dify instances can use LM Studio for local inference serving, meeting security requirements and, for example, enabling AI usage in internal employee tools.

This integration would significantly enhance the flexibility and autonomy of Dify users in managing their own models and providing secure, local AI solutions. I welcome your thoughts and feedback on this idea!
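Since LM Studio exposes an OpenAI-compatible local server, the integration could reuse Dify's OpenAI-style request shape with only the base URL changed. The sketch below is an assumption-laden illustration: the default port 1234 is configurable in LM Studio, and `build_chat_request` is a hypothetical helper that just assembles the request so it can be tested without a running server.

```python
import json

# Assumed default for LM Studio's local OpenAI-compatible server;
# the host and port are configurable in LM Studio itself.
LM_STUDIO_BASE = "http://localhost:1234/v1"

def build_chat_request(model: str, messages: list,
                       base_url: str = LM_STUDIO_BASE,
                       stream: bool = False):
    """Build an OpenAI-style /chat/completions request for LM Studio.

    Returns (url, headers, json_body); no API key header is included,
    since the server runs locally.
    """
    url = f"{base_url}/chat/completions"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({
        "model": model,        # identifier of the model loaded in LM Studio
        "messages": messages,  # standard OpenAI chat format
        "stream": stream,
    }).encode("utf-8")
    return url, headers, body
```

Because the payload matches the OpenAI chat-completions schema, a provider plugin along these lines could share most of its code with Dify's existing OpenAI-compatible providers, differing only in endpoint configuration and the absence of an API key.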