Weiguo Liao
Weiguo
AI & ML interests: None yet
Organizations: None yet
how can I install mlx-lm version 0.28.4?
1 comment · #3 opened 2 months ago by Weiguo
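If the package follows normal PyPI conventions, pinning the release with pip's == specifier is the usual route; a minimal sketch, assuming a 0.28.4 wheel of mlx-lm is published for your platform:

# Pin the release from the command line (assumes the wheel exists on PyPI):
#   pip install "mlx-lm==0.28.4"
# Then confirm what actually got installed:
from importlib.metadata import version

print(version("mlx-lm"))  # expect "0.28.4" if the pinned install succeeded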
mlx-lm not ready
👍 5 · 12 comments · #1 opened 4 months ago by Weiguo
which llama.cpp version needed?
1 comment · #1 opened 5 months ago by Weiguo
Add support for flash-attention2
2 comments · #3 opened over 2 years ago by shigureui
Has anyone written a stream_chat? The current experience is a bit poor.
🤝 👍 2 · 11 comments · #6 opened over 2 years ago by Weiguo
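For repos that do not ship a stream_chat, transformers' generic TextIteratorStreamer gives token-by-token output; a minimal sketch under that assumption, not the repo's own API, with "some/causal-lm" as a placeholder model id:

# Stream generated text chunk by chunk instead of waiting for the full reply.
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

model_id = "some/causal-lm"  # placeholder, not the repo discussed in the thread
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("你好", return_tensors="pt")
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# generate() blocks, so run it in a background thread and consume the streamer here.
Thread(target=model.generate, kwargs=dict(**inputs, streamer=streamer, max_new_tokens=128)).start()
for text_chunk in streamer:
    print(text_chunk, end="", flush=True)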
It seems pretty weak: a 7B model won't even fit into a 3090's VRAM, and unless you install the acceleration package it recommends, it runs painfully slowly.
15 comments · #12 opened over 2 years ago by boxter007
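The arithmetic behind the complaint: 7B parameters at 4 bytes each is roughly 28 GB of weights in FP32, more than a 3090's 24 GB, while FP16/BF16 halves that to about 14 GB. A minimal sketch of half-precision loading, with "some/7b-model" as a placeholder id:

# Weight-memory estimate only (activations and KV cache come on top).
params = 7e9
print(f"fp32: {params * 4 / 1e9:.0f} GB, fp16/bf16: {params * 2 / 1e9:.0f} GB")  # ~28 GB vs ~14 GB

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "some/7b-model",            # placeholder id, not the repo discussed here
    torch_dtype=torch.float16,  # half precision: ~14 GB of weights instead of ~28 GB
).cuda()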
Does BF16 depend on CUDA 11.7? My machine has 12.2.
3 comments · #7 opened over 2 years ago by Weiguo
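On the CUDA side, bfloat16 mainly needs an Ampere-or-newer GPU and a reasonably recent PyTorch build rather than one exact toolkit version; a minimal check, assuming PyTorch is the runtime in question:

# Inspect the CUDA toolkit PyTorch was built against and whether the
# current GPU/driver combination supports bfloat16.
import torch

print(torch.version.cuda)              # e.g. "12.1", the build's CUDA version
print(torch.cuda.is_bf16_supported())  # True on Ampere (RTX 30xx) or newer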