All HF Hub posts

ginipick 
posted an update 2 days ago
🚀 FLUX Workflow Canvas

Welcome to Workflow Canvas, your ultimate AI-driven platform for crafting stunning design concepts and intricate workflow diagrams that empower your business! 🤖✨

ginigen/Workflow-Canvas

Features
Product Design 🛠️
Transform your ideas into reality with sleek, industrial product designs that blend modern aesthetics with advanced technology.

Mindmap 🧠
Generate vibrant, educational mind maps that outline your strategies and processes in a clear, visually engaging layout.

Mockup 📱
Quickly prototype intuitive app interfaces and web designs using clean, hand-drawn wireframes that capture your vision.

Infographic 📊
Build polished, data-rich infographics that communicate complex corporate metrics and trends with style and clarity.

Diagram 📈
Illustrate comprehensive, end-to-end business workflows—from market analysis to implementation—with detailed and organized diagrams.

Flowchart 🔄
Design easy-to-follow, hand-drawn style flowcharts that map out your operational processes using vibrant colors and minimalistic icons.

How It Works
Set Your Parameters:
Customize your creative process by adjusting the seed, dimensions, inference steps, and guidance scale through the intuitive sidebar (see the pipeline sketch after these steps).

Choose Your Visual Style:
Explore our diverse range of tabs—from Product Design and Mindmap to Flowchart—each tailored to a unique creative output.

Get Inspired:
Dive into our rich library of example prompts featuring detailed lists and tree structures to instantly populate your design ideas.

Generate Your Masterpiece:
Click the “Generate” button and watch as your ideas come to life in beautifully rendered images! 🎨
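
For the curious, here is a rough sketch of how those sidebar parameters map onto a diffusers FluxPipeline call; the model ID and values below are illustrative assumptions, not necessarily what the Space runs:

```python
import torch
from diffusers import FluxPipeline

# Illustrative model choice; the Space may wrap a different FLUX variant.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# The four sidebar parameters: seed, dimensions, inference steps, guidance scale.
image = pipe(
    prompt="hand-drawn style flowchart of an order fulfilment process, vibrant colors",
    height=1024,                      # dimensions
    width=1024,
    num_inference_steps=28,           # inference steps
    guidance_scale=3.5,               # guidance scale
    generator=torch.Generator("cuda").manual_seed(42),  # seed
).images[0]
image.save("flowchart.png")
```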

Experience the fusion of art and technology with Workflow Canvas – where your business ideas transform into dynamic, visual masterpieces. Get started today and revolutionize the way you design! 🚀
prithivMLmods 
posted an update 1 day ago
Dino: The Minimalist Multipurpose Chat System 🌠
Agent-Dino: prithivMLmods/Agent-Dino

By default, it performs the following tasks:
{Text-to-Text Generation}, {Image-Text-to-Text Generation}
@image: Generates an image using Stable Diffusion XL.
@3d: Generates a 3D mesh.
@web: Runs a web search agent.
@rAgent: Initiates a reasoning chain using a Llama model for coding explanations.
@tts1-♀, @tts2-♂: Voice generation (Female and Male voices).
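
To make the command routing concrete, here is a toy dispatcher, purely illustrative (Agent-Dino's actual routing lives in the Space's code), showing how such @-prefixed commands can be mapped to tasks:

```python
# Toy example only: route a chat message to a task by its @-prefix.
def route(message: str) -> str:
    handlers = {
        "@image": "text-to-image (Stable Diffusion XL)",
        "@3d": "3D mesh generation",
        "@web": "web search agent",
        "@rAgent": "reasoning chain for coding explanations",
        "@tts1": "female TTS voice",
        "@tts2": "male TTS voice",
    }
    for prefix, task in handlers.items():
        if message.startswith(prefix):
            return task
    return "default text / image-text-to-text generation"

print(route("@image a watercolor fox"))  # -> text-to-image (Stable Diffusion XL)
```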
AdinaY 
posted an update 1 day ago
🚀 StepFun (阶跃星辰) is making BIG open moves!

Last year, their GOT-OCR 2.0 took the community by storm 🔥 but many didn't know they were also building some amazing models. Now, they've just dropped something huge on the hub!

📺 Step-Video-T2V: a 30B bilingual open video model that generates 204 frames (8-10s) at 540p resolution with high information density & consistency.
stepfun-ai/stepvideo-t2v

🔊 Step-Audio-TTS-3B: a TTS model trained with the LLM-Chat paradigm on a large synthetic dataset, capable of generating rap and humming
stepfun-ai/step-audio-67b33accf45735bb21131b0b
burtenshaw 
posted an update about 10 hours ago
AGENTS + FINETUNING! This week Hugging Face Learn has a whole pathway on fine-tuning for agentic applications. Follow these two courses to level up your agent game beyond prompts:

1️⃣ New Supervised Fine-tuning unit in the NLP Course https://huggingface.co/learn/nlp-course/en/chapter11/1
2️⃣ New fine-tuning for agents bonus module in the Agents Course https://huggingface.co/learn/agents-course/bonus-unit1/introduction

Fine-tuning squeezes more out of your model for your specific use case than any prompt can.
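
If you want a taste before diving in, here is a minimal supervised fine-tuning sketch with TRL's SFTTrainer, roughly the pattern the course builds on; the model and dataset choices are illustrative, not the course's exact setup:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any conversational dataset works; this one is used in TRL's own quickstart.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # small model so the example runs quickly
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-output", max_steps=100),
)
trainer.train()
```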
clem 
posted an update 1 day ago
What are the best organizations to follow on @huggingface?

Off the top of my head:
- DeepSeek (35,000 followers): https://huggingface.co/deepseek-ai
- Meta Llama (27,000 followers): https://huggingface.co/meta-llama
- Black Forest Labs (11,000 followers): https://huggingface.co/black-forest-labs
- OpenAI (5,000 followers): https://huggingface.co/openai
- Nvidia (16,000 followers): https://huggingface.co/nvidia
- Microsoft (9,000 followers): https://huggingface.co/microsoft
- AllenAI (2,000 followers): https://huggingface.co/allenai
- Mistral (5,000 followers): https://huggingface.co/mistralai
- xAI (600 followers): https://huggingface.co/xai-org
- Stability AI (16,000 followers): https://huggingface.co/stabilityai
- Qwen (16,000 followers): https://huggingface.co/Qwen
- GoogleAI (8,000 followers): https://huggingface.co/google
- Unsloth (3,000 followers): https://huggingface.co/unsloth
- Bria AI (4,000 followers): https://huggingface.co/briaai
- NousResearch (1,300 followers): https://huggingface.co/NousResearch

Bonus, the agent course org with 17,000 followers: https://huggingface.co/agents-course
sayakpaul 
posted an update 2 days ago
Inference-time scaling meets Flux.1-Dev (and others) 🔥

Presenting a simple re-implementation of "Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps" by Ma et al.

I did the simplest random search strategy, but results can potentially be improved with better-guided search methods.

Supports Gemini 2 Flash & Qwen2.5 as verifiers for "LLMGrading" 🤗

The steps are simple:

For each round:

1> Start by sampling 2 starting noises with different seeds.
2> Score the generations w.r.t. a metric.
3> Obtain the best generation from the current round.

If you have more compute budget, go to the next search round: scale the noise pool (2 ** search_round) and repeat steps 1-3.

This constitutes the random search method as done in the paper by Google DeepMind.
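
If you want to try the idea before opening the repo, here is a hedged sketch of that loop; `generate` and `score` are hypothetical stand-ins for the Flux sampling call and the LLM-grading verifier:

```python
import torch

# Hedged sketch of the random search above. `generate(noise)` and
# `score(image)` are hypothetical stand-ins for the Flux sampling call
# and the LLM-grading verifier in the repo.
def random_search(generate, score, noise_shape, num_rounds=3, device="cuda"):
    best_image, best_score = None, float("-inf")
    for search_round in range(1, num_rounds + 1):
        num_noises = 2 ** search_round  # scale the noise pool each round
        for i in range(num_noises):
            g = torch.Generator(device=device).manual_seed(1000 * search_round + i)
            noise = torch.randn(noise_shape, generator=g, device=device)
            image = generate(noise)  # 1> sample a generation from this noise
            s = score(image)         # 2> score it w.r.t. the metric
            if s > best_score:       # 3> keep the best generation seen so far
                best_image, best_score = image, s
    return best_image, best_score
```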

Code, more results, and a bunch of other stuff are in the repository. Check it out here: https://github.com/sayakpaul/tt-scale-flux/ 🤗
as-cle-bert 
posted an update 1 day ago
I built an AI agent app in less than 8 hours🤯
And, believe me, this is 𝗻𝗼𝘁 clickbait❌

GitHub 👉 https://github.com/AstraBert/PapersChat
Demo 👉 as-cle-bert/PapersChat

The app is called 𝐏𝐚𝐩𝐞𝐫𝐬𝐂𝐡𝐚𝐭, and it is aimed at 𝗺𝗮𝗸𝗶𝗻𝗴 𝗰𝗵𝗮𝘁𝘁𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝘀𝗰𝗶𝗲𝗻𝘁𝗶𝗳𝗶𝗰 𝗽𝗮𝗽𝗲𝗿𝘀 𝗲𝗮𝘀𝗶𝗲𝗿.

𝐇𝐞𝐫𝐞 𝐢𝐬 𝐰𝐡𝐚𝐭 𝐭𝐡𝐞 𝐚𝐩𝐩 𝐝𝐨𝐞𝐬:

📄 Parses the papers that you upload thanks to LlamaIndex🦙 (either with LlamaParse or with simpler, local methods)
📄 Embeds documents with both a sparse and a dense encoder to enable hybrid search (see the sketch after this list)
📄 Uploads the embeddings to Qdrant
⚙️ Activates an Agent based on mistralai/Mistral-Small-24B-Instruct-2501 that will reply to your prompt
🧠 Retrieves information relevant to your question from the documents
🧠 If no relevant information is found, it searches PubMed and arXiv databases
🧠 Returns a grounded answer to your prompt
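
For a concrete feel of the ingestion-plus-hybrid-search part, here is a minimal sketch (not the PapersChat code itself), assuming llama-index, the llama-index-vector-stores-qdrant integration, default embedding/LLM settings, and a local Qdrant instance:

```python
# Start Qdrant first, e.g.: docker run -p 6333:6333 qdrant/qdrant
import qdrant_client
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(host="localhost", port=6333)

# enable_hybrid=True adds a sparse encoder next to the dense one,
# which is what powers the hybrid search mentioned above.
vector_store = QdrantVectorStore(
    client=client, collection_name="papers", enable_hybrid=True
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Parse the uploaded papers and push their embeddings to Qdrant.
documents = SimpleDirectoryReader("./papers").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Ask a question with hybrid (sparse + dense) retrieval.
query_engine = index.as_query_engine(vector_store_query_mode="hybrid")
print(query_engine.query("What methods does this paper propose?"))
```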

𝐇𝐨𝐰 𝐝𝐢𝐝 𝐈 𝐦𝐚𝐧𝐚𝐠𝐞 𝐭𝐨 𝐦𝐚𝐤𝐞 𝐭𝐡𝐢𝐬 𝐚𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧 𝐢𝐧 𝟖 𝐡𝐨𝐮𝐫𝐬?

Three key points:

- LlamaIndex🦙 provides countless integrations with LLM providers, text embedding models and vectorstore services, and takes care of the internal architecture of the Agent. You just plug it in, and it works!🔌⚡
- Qdrant is a vector database service that is extremely easy to set up and use: you just need a one-line Docker command (the one in the sketch above)😉
- Gradio makes frontend development painless and fast, while still providing modern and responsive interfaces🏗️

And a bonus point:

- Deploying the demo app couldn't be easier if you use Gradio-based Hugging Face Spaces🤗

So, no more excuses: build your own AI agent today and do it fast, (almost) for free, and effortlessly🚀

And if you need a starting point, the code for PapersChat is open and fully reproducible on GitHub 👉 https://github.com/AstraBert/PapersChat
m-ric 
posted an update 1 day ago
Less is More for Reasoning (LIMO): a 32B model fine-tuned with 817 examples can beat o1-preview on math reasoning! 🤯

Do we really need o1's huge RL procedure to see reasoning emerge? It seems not.
Researchers from Shanghai Jiao Tong University just demonstrated that carefully selected examples can boost math performance in large language models using SFT alone, with no huge datasets or RL procedures needed.

Their procedure allows Qwen2.5-32B-Instruct to jump from 6.5% to 57% on AIME and from 59% to 95% on MATH, while using only 1% of the data in previous approaches.

⚡ The Less-is-More Reasoning Hypothesis:
‣ Minimal but precise examples that showcase optimal reasoning patterns matter more than sheer quantity
‣ Pre-trained knowledge plus sufficient inference-time compute levels up math skills

➡️ Core techniques:
‣ High-quality reasoning chains with self-verification steps
‣ 817 handpicked problems that encourage deeper reasoning
‣ Enough inference-time computation to allow extended reasoning

💪 Efficiency gains:
‣ Only 817 examples instead of 100k+ (see the dataset sketch below)
‣ 40.5% absolute improvement across 10 diverse benchmarks, outperforming models trained on 100x more data
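
If you want to poke at the data yourself, a quick hedged sketch; this assumes the dataset is published at GAIR/LIMO on the Hub, per the paper's release:

```python
from datasets import load_dataset

# Assumed Hub location and split; field names may differ.
limo = load_dataset("GAIR/LIMO", split="train")
print(len(limo))  # should be 817 curated problems, per the paper
print(limo[0])    # one problem with its long reasoning chain and answer
```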

This really challenges the notion that SFT leads to memorization rather than generalization! And opens up reasoning to GPU-poor researchers 🚀

Read the full paper here 👉 LIMO: Less is More for Reasoning (2502.03387)