AI & ML interests

None defined yet.

Recent Activity

multimodalart
posted an update 4 months ago
Want to iterate on a Hugging Face Space with an LLM?

Now you can easily convert the entire contents of any HF repo (Model, Dataset, or Space) into a single text file and feed it to a language model!

multimodalart/repo2txt
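
For a rough idea of what such a conversion involves, here is a minimal sketch using the huggingface_hub library; the repo id, file filter, and output path are illustrative assumptions, not the Space's actual implementation.

```python
# Minimal sketch: flatten a Hugging Face repo into one text file.
# Assumptions: the repo id, text-like extensions, and output path are
# illustrative; this is not the actual repo2txt implementation.
from huggingface_hub import list_repo_files, hf_hub_download

REPO_ID = "multimodalart/self-forcing"  # any model/dataset/space repo
TEXT_EXTS = (".py", ".md", ".txt", ".json", ".yaml", ".toml")

def repo_to_text(repo_id: str, repo_type: str = "space") -> str:
    chunks = []
    for path in list_repo_files(repo_id, repo_type=repo_type):
        if not path.endswith(TEXT_EXTS):
            continue  # skip binaries such as weights or images
        local = hf_hub_download(repo_id, path, repo_type=repo_type)
        with open(local, encoding="utf-8", errors="ignore") as f:
            chunks.append(f"### {path}\n{f.read()}")
    return "\n\n".join(chunks)

if __name__ == "__main__":
    with open("repo.txt", "w", encoding="utf-8") as out:
        out.write(repo_to_text(REPO_ID))  # paste into an LLM's context
```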
multimodalart
posted an update 8 months ago
Self-Forcing - a real-time video model distilled from Wan 2.1 by @adobe is out, and they open-sourced it 🐐

I've built a live real-time demo on Spaces 📹💨

multimodalart/self-forcing
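
If you'd rather call a Space like this from Python than through the browser, the gradio_client library is the usual route; the endpoint name and arguments below are placeholders for illustration, since the demo's actual API may differ.

```python
# Minimal sketch: query a Gradio Space programmatically.
# Assumption: the api_name and its arguments are illustrative;
# run client.view_api() to see the Space's real endpoints.
from gradio_client import Client

client = Client("multimodalart/self-forcing")
client.view_api()  # prints the Space's actual endpoints and parameters

# Hypothetical call -- replace api_name/args with what view_api() reports:
# result = client.predict("a corgi surfing a wave", api_name="/generate")
# print(result)  # typically a path to the generated video
```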
akhaliq
posted an update about 1 year ago
Google drops Gemini 2.0 Flash Thinking

A new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans (with its thoughts visible), can solve complex problems at Flash speeds, and more.

Now available in anychat; try it out: https://huggingface.co/spaces/akhaliq/anychat
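
For readers who'd rather hit the model directly than through anychat, here is a minimal sketch with the google-generativeai SDK; the model id string is an assumption based on the experimental release naming and may have changed since.

```python
# Minimal sketch: call Gemini 2.0 Flash Thinking via the google-generativeai SDK.
# Assumption: the model id follows the experimental naming at release time
# and may since have been renamed or retired.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

response = model.generate_content(
    "A bat and a ball cost $1.10 together; the bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
print(response.text)  # thinking variants also surface their reasoning steps
```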
akhaliq
posted an update about 1 year ago
akhaliq
posted an update about 1 year ago
akhaliq
posted an update about 1 year ago
multimodalart
posted an update over 1 year ago
akhaliq
posted an update over 1 year ago
Phased Consistency Model

Phased Consistency Model (2405.18407)

The consistency model (CM) has recently made significant progress in accelerating the generation of diffusion models. However, its application to high-resolution, text-conditioned image generation in the latent space (a.k.a. LCM) remains unsatisfactory. In this paper, we identify three key flaws in the current design of LCM. We investigate the reasons behind these limitations and propose the Phased Consistency Model (PCM), which generalizes the design space and addresses all identified limitations. Our evaluations demonstrate that PCM significantly outperforms LCM across 1-16 step generation settings. While PCM is specifically designed for multi-step refinement, its 1-step generation results are superior or comparable to those of previous state-of-the-art methods designed specifically for 1-step generation. Furthermore, we show that PCM's methodology is versatile and applicable to video generation, enabling us to train a state-of-the-art few-step text-to-video generator.
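
To make the few-step setting concrete, here is a minimal sketch of consistency-style sampling with diffusers; it uses the public LCM-LoRA weights (the baseline PCM improves on) rather than PCM's own scheduler and checkpoints, so treat it as an illustration of 1-16 step latent-space generation, not PCM itself.

```python
# Minimal sketch: few-step text-to-image sampling in the latent space,
# shown with LCM-LoRA (the baseline PCM improves on), not PCM itself.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Swap in the consistency-model scheduler and the distilled LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Few-step generation: 4 steps instead of the usual 25-50.
image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=4,
    guidance_scale=1.0,  # distilled models typically need low or no CFG
).images[0]
image.save("astronaut.png")
```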