All HF Hub posts

abhishek 
posted an update 2 days ago
INTRODUCING Hugging Face AutoTrain Client 🔥
Fine-tuning models just got even easier!
Now you can fine-tune SOTA models on all compatible model-dataset pairs on the Hugging Face Hub using Python, running on Hugging Face servers. Choose from a number of GPU flavors, millions of model-dataset pairs, and 10+ tasks 🤗

To try it, install autotrain-advanced using pip. If you install with --no-deps to skip dependency resolution, you'll need to install some of the dependencies by hand.

pip install autotrain-advanced

Github repo: https://github.com/huggingface/autotrain-advanced
prithivMLmods 
posted an update 1 day ago
FLUX 1.1 [pro] Ultra : API - { 4x Higher Image Resolutions }
Up to 4 megapixels, 10 seconds per sample. { Hi-Res }

{ Blog Post ⛵ } : https://huggingface.co/blog/prithivMLmods/flux-pro-endpoint

Endpoint Creation Step by Step: 🧵
-> Sign up at { api.bfl.ml } & get your API key: https://api.bfl.ml/auth/profile
-> File Structure:
flux_image_generation/
├── .env
├── generate_image.py
└── requirements.txt

-> Step 0: Add Your API Key to an Environment File
{ .env }
BFL_API_KEY=your_actual_api_key_here

-> Step 1: Install Required Libraries
{ requirements.txt }
requests
python-dotenv

-> Step 2: Setup the Python Script
{ generate_image.py } - https://github.com/PRITHIVSAKTHIUR/Flux-API/blob/main/generate_image.py

-> Step 3: Install the Requirements & Run the Script
pip install -r requirements.txt

python generate_image.py

-> Polling: The script polls the API every 0.5 seconds until the image generation result is ready. It also checks for a successful response after submitting the request.
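The submit-then-poll flow can be sketched roughly like this (a minimal, hypothetical version of generate_image.py; the `x-key` header, payload fields, and `get_result` endpoint are assumptions based on the bfl docs, so double-check them against docs.bfl.ml and the repo script):

```python
# Hypothetical sketch of generate_image.py: submit a generation request
# to the BFL API, then poll every 0.5 s until the sample is ready.
# Endpoint names and payload fields are assumptions based on docs.bfl.ml.
import os
import time

import requests

API_BASE = "https://api.bfl.ml/v1"


def submit_request(prompt: str, endpoint: str = "flux-pro-1.1-ultra") -> str:
    """Submit a generation request and return its request id."""
    resp = requests.post(
        f"{API_BASE}/{endpoint}",
        headers={"x-key": os.environ["BFL_API_KEY"]},
        json={"prompt": prompt},
    )
    resp.raise_for_status()  # fail loudly on an unsuccessful response
    return resp.json()["id"]


def poll_result(request_id: str, interval: float = 0.5) -> dict:
    """Poll get_result every `interval` seconds until the image is ready."""
    while True:
        resp = requests.get(f"{API_BASE}/get_result", params={"id": request_id})
        resp.raise_for_status()
        data = resp.json()
        if data.get("status") == "Ready":
            return data["result"]
        time.sleep(interval)


if __name__ == "__main__" and os.getenv("BFL_API_KEY"):
    # Load BFL_API_KEY from .env first, e.g. with python-dotenv's load_dotenv().
    rid = submit_request("a lighthouse at dawn, ultra high resolution")
    print(poll_result(rid))
```

The real script in the repo linked below is the authoritative version; this sketch only illustrates the 0.5-second polling loop described above.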

For more visit:
🔺for script: https://github.com/PRITHIVSAKTHIUR/Flux-API/tree/main
🔺bfl doc: https://docs.bfl.ml/quick_start/gen_image/#__tabbed_1_2

Endpoints for image generation: 🧵
-> /flux-pro-1.1-ultra
-> /flux-pro-1.1
-> /flux-pro
-> /flux-dev

Each account ID has 50 free credits; usage is deducted based on the cost per image sample generated by the chosen model.

.
.
.
@prithivMLmods 🤗
aaditya 
posted an update about 20 hours ago
Last Week in Medical AI: Top Research Papers/Models 🔥
(November 2 - November 9, 2024)

🏅 Medical AI Paper of the Week:
Exploring Large Language Models for Specialist-level Oncology Care

Medical LLM & Other Models:
- GSCo: Generalist-Specialist AI Collaboration
- PediatricsGPT: Chinese Pediatric Assistant
- MEG: Knowledge-Enhanced Medical QA
- AutoProteinEngine: Multimodal Protein LLM

Frameworks and Methodologies:
- BrainSegFounder: 3D Neuroimage Analysis
- PASSION: Sub-Saharan Dermatology Dataset
- SAM for Lung X-ray Segmentation
- Label Critic: Data-First Approach
- Medprompt Runtime Strategies

Medical LLM Applications:
- CataractBot: Patient Support System
- CheX-GPT: X-ray Report Enhancement
- CardioAI: Cancer Cardiotoxicity Monitor
- HealthQ: Healthcare Conversation Chain
- PRObot: Diabetic Retinopathy Assistant

Medical LLMs & Benchmarks:
- MediQ: Clinical Reasoning Benchmark
- Touchstone: Segmentation Evaluation
- Medical LLM Adaptation Progress
- Fine-Tuning Medical QA Strategies

AI in Healthcare Ethics:
- Healthcare Robotics with LLMs
- XAI in Clinical Practice
- Precision Rehabilitation Framework
- Multimodal AI Challenges

Now you can watch and listen to the latest Medical AI papers daily on our YouTube and Spotify channels as well!

- Full Thread: https://x.com/OpenlifesciAI/status/1855207141302473090
- YouTube: https://youtu.be/ad0uTnYuTo8
- Spotify: https://open.spotify.com/episode/6s39t1UJZk1i10szuXP2qN
prithivMLmods 
posted an update 3 days ago
Quintet Drop : : 🤗

{ Flux LoRA DLC ⛵ } : prithivMLmods/FLUX-LoRA-DLC

-- Purple Dreamy
{ pop of color } : prithivMLmods/Purple-Dreamy-Flux-LoRA

-- Golden Dust
{ shimmer contrast } : prithivMLmods/Golden-Dust-Flux-LoRA

-- Lime Green
{ depth to the composition } : prithivMLmods/Lime-Green-Flux-LoRA

-- Flare Strike
{ Fractured Line } : prithivMLmods/Fractured-Line-Flare

-- Orange Chroma
{ studio lighting } : prithivMLmods/Orange-Chroma-Flux-LoRA
.
.
.
{ collection } : prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be
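For anyone who wants to try one of these adapters in code: a hedged sketch using diffusers' FluxPipeline with load_lora_weights. The repo id comes from the list above, but the base model, dtype, prompt, and sampler settings are assumptions; check each model card for the recommended trigger word and parameters.

```python
# Hedged sketch: apply one of the LoRAs above to FLUX.1-dev with diffusers.
# The base model, dtype, step count, and prompt are assumptions; check the
# individual model cards for recommended trigger words and settings.
def generate_with_lora(
    lora_repo: str = "prithivMLmods/Purple-Dreamy-Flux-LoRA",
    prompt: str = "Purple Dreamy, a soft pastel portrait",
    out_path: str = "sample.png",
) -> None:
    import torch
    from diffusers import FluxPipeline  # heavy deps, so imported lazily

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights(lora_repo)  # fetches the adapter from the Hub
    pipe.to("cuda")

    image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
    image.save(out_path)
```

Calling generate_with_lora() downloads FLUX.1-dev (a gated repo, so it needs an accepted license and a GPU); treat this as a starting point rather than a turnkey script.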

@prithivMLmods
m-ric 
posted an update 2 days ago
AndroidLab: First ever systematic benchmark for Android mobile agents shows that small, fine-tuned open models can power a JARVIS system on your smartphone 📱🔥

A team from Tsinghua University just released AndroidLab, the first systematic framework to evaluate and train Android mobile agents that works with both text-only and multimodal models.

They show that fine-tuning small open-source models can significantly boost performance, matching that of much bigger closed models like GPT-4o.

The team built:

📊 A reproducible benchmark with 138 tasks across 9 apps to evaluate mobile agents systematically

📝📱 A framework supporting both text-only (via XML) and visual (via marked screenshots) interfaces

✅ An instruction dataset of 10.5k operation traces for training mobile agents

Key insights:

- 📈 Fine-tuning improves performance BY A LOT: Open-source model Llama-3.1-8B improves from 2% to 24% success rate after training, nearly reaching GPT-4o performance although it’s much smaller
- ⚙️ Text-only agents match multimodal ones: XML-based agents achieve similar performance to screenshot-based multimodal agents.

Read their paper here 👉 AndroidLab: Training and Systematic Benchmarking of Android Autonomous Agents (2410.24024)
fuzzy-mittenz 
posted an update 2 days ago
Welcome Newcomers, Did you bring your towel?
MonsterMMORPG 
posted an update about 10 hours ago
How To Use Mochi 1 Open Source Video Generation Model On Your Windows PC, RunPod and Massed Compute

Tutorial Link : https://youtu.be/iqBV7bCbDJY

Mochi 1 from Genmo is the newest state-of-the-art open-source video generation model that you can use for free on your computer. This model is a breakthrough like the very first Stable Diffusion model, but this time for video generation. In this tutorial, I show you how to use the Genmo Mochi 1 video generation model on your computer, on Windows, locally, with the most advanced and very easy to use SwarmUI. SwarmUI is as fast as ComfyUI but as easy to use as the Automatic1111 Stable Diffusion web UI. Moreover, if you don't have a powerful GPU to run this model locally, I also show you how to use it on the best cloud providers, RunPod and Massed Compute.

🔗 Public Open Access Article Used in Video ⤵️
▶️ https://www.patreon.com/posts/106135985

Amazing Ultra Important Tutorials with Chapters and Manually Written Subtitles / Captions
Stable Diffusion 3.5 Large How To Use Tutorial With Best Configuration and Comparison With FLUX DEV : https://youtu.be/-zOKhoO9a5s

FLUX Full Fine-Tuning / DreamBooth Tutorial That Shows A Lot Info Regarding SwarmUI Latest : https://youtu.be/FvpWy1x5etM

Full FLUX Tutorial — FLUX Beats Midjourney for Real : https://youtu.be/bupRePUOA18

Main Windows SwarmUI Tutorial (Watch to Learn How to Use It)

Covers installation and usage; you have to watch this to learn how to use SwarmUI.
Has 70 chapters and manually fixed captions : https://youtu.be/HKX8_F1Er_w
singhsidhukuldeep 
posted an update 1 day ago
Thinking about upgrading from Python 3.10 to 3.11? Here's why you should make the move - a deep technical breakdown that might convince you:

>> Performance Revolution
The performance improvements are staggering, with benchmarks showing 10-60% faster execution across different workloads. Let me break down the game-changing features:

>> Core Architecture Changes
Python 3.11's interpreter now uses statically allocated core modules, eliminating the multi-step loading process we've dealt with in 3.10. This means your applications will start 10-15% faster out of the gate.

>> Function Optimization
The redesigned frame objects are a thing of beauty - they've been stripped of unnecessary baggage, resulting in a 3-7% speedup for all function calls. But it gets better: function calls are now inlined, giving us a 1-3% boost, with recursive functions like Fibonacci seeing up to 1.7x improvement!
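You can get a feel for the recursion speedup yourself with a deliberately naive recursive Fibonacci, run once under 3.10 and once under 3.11 (a micro-benchmark sketch; absolute timings depend on your machine):

```python
import sys
import time


def fib(n: int) -> int:
    # Deliberately naive recursion: it is call-heavy, so it benefits
    # directly from 3.11's cheaper frames and inlined function calls.
    return n if n < 2 else fib(n - 1) + fib(n - 2)


start = time.perf_counter()
result = fib(27)
elapsed = time.perf_counter() - start
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"fib(27)={result} in {elapsed:.3f}s")
```

Running the same script under both interpreters is the cleanest way to see the claimed up-to-1.7x gap for yourself.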

>> Adaptive Intelligence
The new Specializing Interpreter is perhaps the most exciting addition. Think of it as a lightweight JIT - it identifies hot code paths and optimizes them automatically.

The interpreter now automatically specializes math operations, array indexing, and even sequence unpacking based on actual usage patterns.

>> Exception Handling Revolution
My favorite feature? Zero-cost exceptions! Your try-except blocks no longer carry overhead when no exceptions occur. The code runs at full speed until an exception actually happens.
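A quick way to see zero-cost exceptions on your own machine (a micro-benchmark sketch; exact numbers vary): on 3.11 the try/except version should be nearly indistinguishable from the plain one, since the handler only costs anything when an exception is actually raised.

```python
import timeit


def with_handler(n: int) -> int:
    total = 0
    for i in range(n):
        try:
            total += i          # no exception is raised here, so on 3.11
        except ValueError:      # this handler costs nothing at runtime
            pass
    return total


def without_handler(n: int) -> int:
    total = 0
    for i in range(n):
        total += i
    return total


t_try = timeit.timeit(lambda: with_handler(10_000), number=200)
t_plain = timeit.timeit(lambda: without_handler(10_000), number=200)
print(f"with try/except: {t_try:.3f}s, without: {t_plain:.3f}s")
```

Both functions compute the same sum; only the presence of the (never-triggered) handler differs.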

Ready to make the switch? These improvements aren't just numbers - they're real-world performance gains waiting to be unlocked in your codebase.