Activity
[update] move the main_vlm model config file
[update] Avoid using the same tokenizer service port for LLM and VLLM…
[update] Added logic to stop inference before unit exit
[update] Add multi-turn conversation support to llm
[update] Use dynamically allocated tokenizer service port instead
[fix] Fix issue where the python tokenizer server could not exit
Merge branch 'dev' of github.com:m5stack/StackFlow into dev
[fix] Fix mode_config & tokenizer_.py PATH
[fix] Fix the bug where tokens were truncated incorrectly
[update] del tokenizer && remove deb name
Merge branch 'dev' of github.com:m5stack/StackFlow into dev
[update] update internVL2.5-1B version
[update] upload tokenizer_deepseek-r1-1.5B-ax630c.py
[update] update mode_deepseek-r1-1.5B-ax630c.json