Model Gallery

Discover and install AI models from our curated collection

3 models available
1 repository
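
Models from this gallery can be installed programmatically against a running LocalAI instance through its gallery apply endpoint. A minimal sketch in Python; the base URL and the `repository@model` id form are assumptions, so check your LocalAI version's gallery documentation for the exact payload:

```python
# Minimal sketch: ask a running LocalAI instance to install a gallery model.
# The base URL and the exact model id string are assumptions; check your
# LocalAI version's gallery documentation for the precise payload format.
import requests

BASE_URL = "http://localhost:8080"  # assumed LocalAI address (default port)

resp = requests.post(
    f"{BASE_URL}/models/apply",
    json={"id": "localai@glm-4.7-flash"},  # "<repository>@<model-name>", assumed form
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # job handle that can be polled for installation progress
```
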
glm-4.7-flash
**GLM-4.7-Flash** is a 30B-A3B MoE (Mixture of Experts) model designed for efficient deployment. It outperforms competitors in benchmarks like AIME 25, GPQA, and τ²-Bench, offering strong accuracy while balancing performance and efficiency. Optimized for lightweight use cases, it supports inference via frameworks like vLLM and SGLang, with detailed deployment instructions in the official repository. Ideal for applications requiring high-quality text generation with minimal resource consumption.

Repository: localai
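
Once installed, LocalAI serves gallery models over an OpenAI-compatible API, so the model can be queried with the standard `openai` Python client. A minimal sketch, assuming a LocalAI instance on `localhost:8080` (the default port) and the gallery model id above; adjust both to your deployment:

```python
# Minimal sketch: chat with an installed gallery model through LocalAI's
# OpenAI-compatible endpoint. The base URL and model id are assumptions;
# adjust them to match your local deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed LocalAI address
    api_key="not-needed",                 # LocalAI does not require a key by default
)

response = client.chat.completions.create(
    model="glm-4.7-flash",  # model id as listed in the gallery
    messages=[{"role": "user", "content": "Summarize what a MoE model is in two sentences."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```
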

huihui-glm-4.6v-flash-abliterated
**Huihui-GLM-4.6V-Flash (Abliterated)** is a text-based large language model derived from the **zai-org/GLM-4.6V-Flash** base model, featuring reduced safety filters and uncensored capabilities. Designed for text generation, it supports conversational tasks but excludes image processing.

**Key Features:**
- **Base Model**: GLM-4.6V-Flash (original author: zai-org)
- **Quantized Format**: GGUF (optimized for efficiency)
- **No Image Support**: only text-based interactions are enabled
- **Custom Training**: abliterated to remove restrictive outputs, prioritizing openness over safety

**Important Notes:**
- **Risk of Sensitive Content**: reduced filtering may generate inappropriate or controversial outputs
- **Ethical Use**: suitable for research or controlled environments; not recommended for public or commercial deployment without caution
- **Legal Responsibility**: users must ensure compliance with local laws and ethical guidelines

**Use Cases:**
- Experimental text generation
- Controlled research environments
- Testing safety filtering mechanisms

*Note: This model is not suitable for production or public-facing applications without thorough review.*

Repository: localai
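
Because this entry ships in GGUF format, the quantized weights can also be loaded directly with `llama-cpp-python` for local experimentation. A minimal sketch; the file name and parameter values are illustrative assumptions rather than values taken from the repository:

```python
# Minimal sketch: load a GGUF quantization of the model with llama-cpp-python.
# The model_path is a hypothetical local file name; download the actual GGUF
# from the model repository and point to it here.
from llama_cpp import Llama

llm = Llama(
    model_path="./huihui-glm-4.6v-flash-abliterated.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window; adjust to the model's supported length
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short haiku about local inference."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```
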

glm-4.5v-i1
A **quantized version** of the **GLM-4.5V** large language model, originally developed by **zai-org**. This repository provides multiple quantized variants of the model, optimized for different trade-offs between size, speed, and quality. The base model, **GLM-4.5V**, is a multilingual (Chinese/English) large language model, and this quantized version is designed for efficient inference on hardware with limited memory.

Key features include:
- **Quantization options**: IQ2_M, Q2_K, Q4_K_M, IQ3_M, IQ4_XS, etc., with sizes ranging from 43 GB to 96 GB
- **Performance**: optimized for inference, with some variants (e.g., Q4_K_M) balancing speed and quality
- **Vision support**: the model is a vision model, with mmproj files available in the static repository
- **License**: MIT

This quantized version is ideal for applications requiring compact, efficient models while retaining most of the original capabilities of the base GLM-4.5V.

Repository: localai
License: MIT
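
Since the entry describes GLM-4.5V as a vision model (with mmproj files in the static repository), an image can be passed using the OpenAI-style multimodal message format. A minimal sketch, again assuming a LocalAI instance at `localhost:8080`, that the installed model id matches the gallery name above, and a placeholder image URL:

```python
# Minimal sketch: send a vision request to a quantized GLM-4.5V variant through
# an OpenAI-compatible endpoint. Base URL, model id, and image URL are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="glm-4.5v-i1",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/sample.png"}},  # placeholder URL
        ],
    }],
    max_tokens=128,
)
print(response.choices[0].message.content)
```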