This weekend in Generative Media
OpenAI courts Hollywood; Stability AI CEO resigns; Photography is no longer evidence
OpenAI Courts Hollywood in Meetings With Film Studios, Directors (Bloomberg)
Stability AI CEO resigns to ‘pursue decentralized AI’ (The Verge)
Stability AI, Makers of Stable Diffusion, Could Be in Huge Trouble (PetaPixel)
Stability AI Announcement (Stability)
Arizona newsletter makes Kari Lake deepfake (The Hill)
Financial Times tests an AI chatbot trained on decades of its own articles (The Verge)
How AI Can Help Us End Design Education Anachronisms (Arch Daily)
Create Visual Stories with Consistent Characters and Scenes using Generative AI (Katalyst)
Create breathtaking videos with AI (PixVerse)
Controllable Video Generation: Powered by JST-1, the first video-3D foundation model with actual physics understanding, starting from making any character move as you want. (Viggle.AI)
Made this Viggle dialogue test in about 20 minutes on my mobile in the cab ride home from work. (X)
DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors (HuggingFace)
LightIt: Illumination Modeling and Control for Diffusion Models (CVPR 2024, project page)
CMD: Efficient Video Diffusion Models via Content-Frame Motion-Latent Decomposition (ICLR 2024, project page)
Explorative Inbetweening of Time and Space (project page)
OMG: Occlusion-friendly Personalized Multi-concept Generation In Diffusion Models (project page)
Diffusion Models are Geometry Critics: Single Image 3D Editing Using Pre-Trained Diffusion Priors (project page)
You Only Sample Once: Taming One-Step Text-To-Image Synthesis by Self-Cooperative Diffusion GANs (arXiv)