Hugging Face Introduces Open-Source SmolVLM Vision Language Model Focused on Efficiency
TheNewsHub – Mon, 02 Dec 2024

Hugging Face, the artificial intelligence (AI) and machine learning (ML) platform, introduced a new vision-focused AI model last week. Dubbed SmolVLM (where VLM is an acronym for vision language model), it is a compact model focused on efficiency. The company claims that, thanks to its small size and high efficiency, it can be useful for enterprises and AI enthusiasts who want AI capabilities without investing heavily in infrastructure. Hugging Face has also open-sourced the SmolVLM vision model under the Apache 2.0 license for both personal and commercial usage.

Hugging Face Introduces SmolVLM

In a blog post, Hugging Face detailed the new open-source vision model. The company called the AI model “state-of-the-art” for its efficient usage of memory and fast inference. Highlighting the usefulness of a small vision model, the company noted the recent trend of AI firms scaling down models to make them more efficient and cost-effective.

[Image: Small vision model ecosystem. Photo Credit: Hugging Face]

The SmolVLM family has three AI model variants, each with two billion parameters. The first is SmolVLM-Base, which is the standard model. Apart from this, SmolVLM-Synthetic is a fine-tuned variant trained on synthetic data (data generated by AI rather than collected from the real world), and SmolVLM-Instruct is the instruction-tuned variant that can be used to build end-user-facing applications.

Coming to technical details, the vision model can operate with just 5.02GB of GPU RAM, which is significantly lower than Qwen2-VL 2B’s requirement of 13.7GB of GPU RAM and InternVL2 2B’s 10.52GB of GPU RAM. Due to this, Hugging Face claims that the AI model can run on-device on a laptop.

SmolVLM can accept a sequence of text and images in any order and analyse them to generate responses to user queries. It encodes each 384 x 384 pixel image patch into 81 visual tokens. The company claimed that this enables the model to encode a text prompt and a single image in around 1,200 tokens, as opposed to the 16,000 tokens required by Qwen2-VL.
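The token arithmetic above can be sketched with a small back-of-the-envelope calculation. This is an illustrative estimate only, based on the figures reported in the article (81 visual tokens per 384 x 384 patch); SmolVLM's actual preprocessing may resize, pad, or split images differently.

```python
import math

TOKENS_PER_PATCH = 81   # reported visual tokens per 384x384 patch
PATCH_SIZE = 384

def visual_tokens(width: int, height: int) -> int:
    """Rough estimate of visual tokens for an image tiled into
    384x384 patches (ceiling division; a simplification of the
    real pipeline)."""
    patches = math.ceil(width / PATCH_SIZE) * math.ceil(height / PATCH_SIZE)
    return patches * TOKENS_PER_PATCH

# A single 384x384 image costs just 81 visual tokens, leaving most
# of a ~1,200-token budget for the text prompt.
print(visual_tokens(384, 384))  # 81
print(visual_tokens(768, 768))  # 4 patches -> 324
```

Under this (simplified) accounting, even a larger image stays well under the 16,000 tokens the article attributes to Qwen2-VL for the same input.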

With these specifications, Hugging Face highlights that SmolVLM can be easily used by smaller enterprises and AI enthusiasts, and deployed on local systems without major upgrades to the tech stack. Enterprises will also be able to run the model for text- and image-based inference without incurring significant costs.


Alibaba Researchers Unveil Marco-o1 AI Model As Another Reasoning-Focused Competitor to OpenAI’s o1
TheNewsHub – Mon, 02 Dec 2024

Alibaba recently introduced a reasoning-focused artificial intelligence (AI) model dubbed Marco-o1. The model is similar to the QwQ-32B large language model, which is also optimised for tasks requiring advanced reasoning capabilities. One important distinction, however, is that Marco-o1 is a smaller model, distilled from the Qwen2-7B-Instruct model. The Chinese tech giant claimed that several fine-tuning exercises were used to make the new model reasoning-focused. Additionally, the researchers highlighted that it is optimised for complex real-world problem-solving tasks.

Alibaba Marco-o1 AI Model

The new AI model is detailed in a research paper published on arXiv, an online pre-print journal. Notably, the papers published in the online journal are not peer-reviewed. Additionally, Alibaba has also hosted the AI model on Hugging Face and has permitted downloading and using it for personal and commercial use cases under the Apache 2.0 licence.

However, it is not fully open-sourced, as only a partial dataset has been made available. As such, users will not be able to fully replicate the model or break it down to analyse its architecture or components.

Coming to Marco-o1, it is fine-tuned from the Qwen2-7B-Instruct foundation model. In the paper, the researchers highlighted that the AI model is powered by chain-of-thought (CoT) fine-tuning, Monte Carlo Tree Search (MCTS), reflection mechanisms, and other reasoning strategies.
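To make the chain-of-thought (CoT) fine-tuning idea above concrete, here is a hypothetical, minimal example of the shape such a training record might take: the target response walks through intermediate reasoning steps before stating the final answer. The question and wording are invented for illustration and are not from Alibaba's actual dataset.

```python
# A hypothetical CoT fine-tuning record: the model is trained to
# produce the reasoning steps, not just the final answer.
cot_example = {
    "prompt": "If a train travels 60 km in 40 minutes, what is its speed in km/h?",
    "response": (
        "Let's think step by step. "
        "40 minutes is 40/60 = 2/3 of an hour. "
        "Speed = distance / time = 60 / (2/3) = 90. "
        "The answer is 90 km/h."
    ),
}

# The final answer can be read off the end of the reasoning trace.
print(cot_example["response"].split("The answer is ")[-1])  # 90 km/h.
```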

As a result, Alibaba’s Marco-o1 can tackle open-ended questions and generate responses to queries “where clear standards are absent and rewards are challenging to quantify.” However, it should be understood that these advanced reasoning abilities do not come from any hardware or architectural advancement.

Instead, reasoning models today typically rely on a technique called test-time compute, which lets an AI model spend more processing time on a single query. This allows them to test different approaches to a problem and fact-check themselves. As a result, these models are geared towards providing more accurate responses and completing complex tasks. One important area where Marco-o1 excels, as per the researchers, is understanding colloquial nuances and translating slang expressions.

As for limitations, the researchers acknowledged that while Marco-o1 shows reasoning characteristics, “its performance still falls short of a fully realised” reasoning model.
