In Live 116, I conducted a live work session learning how to fine-tune Stable Diffusion models with Low-Rank Adaptation (LoRA).
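Below is a minimal sketch of the approach, attaching low-rank adapters to a Stable Diffusion UNet with Hugging Face's diffusers and peft packages. The model id, rank, and learning rate are illustrative rather than the exact values from the stream.

```python
import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig

# Load a Stable Diffusion v1.5 pipeline (model id is illustrative).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Freeze the base weights; only the low-rank adapters will train.
pipe.unet.requires_grad_(False)

# Attach LoRA adapters to the UNet's attention projections.
lora_config = LoraConfig(
    r=4,                     # rank of the low-rank update matrices
    lora_alpha=4,            # scaling applied to the update
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
pipe.unet.add_adapter(lora_config)

# Hand only the small adapter parameters to the optimizer.
lora_params = [p for p in pipe.unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(lora_params, lr=1e-4)
```

Because only the rank-4 update matrices train, a fine-tune like this touches a tiny fraction of the UNet's weights and fits comfortably in consumer memory.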
If this interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
01:07 · Introduction
01:21 · Today
02:19 · Fine-Tune with LoRA
04:09 · Image Diffusion Slides
06:43 · Fine-Tune with LoRA
13:31 · Stable Diffusion & DALL-E
22:27 · Fine-Tuning with LoRA
01:34:20 · Outro
In Live 115, we played with tldraw's 'Draw Fast' experiment, which turns freehand scribbles and shapes into realistic images using the Optimized Latent Consistency (Stable Diffusion v1.5) machine learning model through fal.ai's API.
Thanks to the tldraw team for open-sourcing this experiment. ❤️
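As a rough idea of what the fal.ai side of such a pipeline looks like, here's a minimal Python sketch with the fal-client package. The endpoint id, argument names, and response shape are assumptions; check fal.ai's model gallery for the exact schema, and set the FAL_KEY environment variable first.

```python
import fal_client

# Call a latent-consistency endpoint (id and arguments are hypothetical).
result = fal_client.subscribe(
    "fal-ai/fast-lcm-diffusion",
    arguments={
        "prompt": "a watercolor landscape with mountains",
        "num_inference_steps": 4,  # LCM models need very few steps
    },
)
print(result["images"][0]["url"])  # assumed response shape
```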
If this interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
00:17 · Introduction
02:30 · Today
04:17 · Draw Fast by tldraw
06:15 · Fal AI
07:20 · Hands-On Draw Fast
08:03 · What is Draw Fast?
10:09 · Clone Draw Fast
14:16 · Fal AI
15:04 · Sign Up
16:41 · API Key
20:17 · Pricing
21:55 · DEMO
25:55 · Credits
28:03 · Models
30:57 · DEMO
37:59 · Challenge
41:27 · Break
44:42 · tldraw React Component
49:23 · Draw Fast Code
01:05:50 · Outro
In Live 113, we ran the 2B- and 7B-parameter open models of Google's Gemma LLM on an Apple silicon Mac, both on the CPU and the GPU.
We downloaded the Instruct models with the Hugging Face CLI and used PyTorch with Hugging Face's Transformers and Accelerate Python packages to run Gemma locally.
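A minimal sketch of that workflow, assuming you've accepted Gemma's license on Hugging Face and downloaded the weights (e.g., with `huggingface-cli download google/gemma-2b-it`); the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # the 2B Instruct model

# Use Apple's Metal Performance Shaders (MPS) backend when available.
device = "mps" if torch.backends.mps.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to(device)

inputs = tokenizer("Write a haiku about unified memory.", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```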
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
01:23 · Introduction
02:46 · Previously
03:11 · Today
03:45 · Elgato Prompter
06:19 · Interlude
06:43 · Google Gemma 2B & 7B
08:45 · Overview
11:59 · Hugging Face CLI
14:01 · CLI Install
14:54 · CLI Login
15:33 · Download Gemma
22:19 · Run Gemma Locally
24:49 · Anaconda Environment
29:00 · Gemma on the CPU
52:56 · Apple Silicon GPUs
55:32 · List Torch Silicon MPS Device
56:50 · Gemma on Apple Silicon GPUs
01:08:16 · Sync Samples to Git
01:17:22 · Thumbnail
01:28:42 · Links
01:31:12 · Chapters
01:36:28 · Outro
In Live 112, we did a hands-on example of how to deploy a web app with Vercel.
We used Yarn Modern (4.1.0) to create, develop, and build a Vite app that uses React, SWC & TypeScript, pushed the app to GitHub, and imported the Git repository into a Vercel deployment, which then rebuilds and deploys on every code change.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
00:16 · Introduction
01:58 · Previously
02:26 · Today
05:21 · Diffusion Models for Visual Computing
10:07 · LGM
11:21 · Interlude
12:53 · Vite, React & TypeScript Apps with Yarn Modern
17:20 · Create the App
24:29 · Push to Git
29:07 · Deploy to Vercel
33:40 · Edit the App
42:53 · YouTube Channel
45:23 · Draw Fast
46:25 · Markers
47:51 · Elgato Prompter
48:27 · Markers
51:45 · Outro
In Live 111, I showed a few tools I've recently discovered.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
00:11 · Introduction
02:34 · Previously
03:54 · Password Managers
06:45 · Notion
07:57 · Animations with Lottielab
13:33 · Animations with Linearity Move
17:31 · Visual Electric: Generative AI Images
21:32 · Break
23:25 · Visual Electric
26:27 · Future Topics
27:03 · Outro
In Live 110, I continued looking at Apple's MLX framework.
Watch this stream to learn how to run MLX code in Python and generate text with Mistral 7B on Apple silicon.
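As a rough sketch of what that looks like with the mlx-lm package; the quantized model id is an assumption, and any mlx-community conversion of Mistral 7B should behave similarly.

```python
from mlx_lm import load, generate

# Download (or load from cache) an MLX-converted, 4-bit quantized Mistral 7B.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

# Generate a completion on the Apple silicon GPU.
text = generate(
    model,
    tokenizer,
    prompt="Explain unified memory in one sentence.",
    max_tokens=100,
)
print(text)
```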
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!
00:17 · Introduction
02:35 · Today
04:35 · Apple MLX
06:40 · mlx
08:24 · mlx-data
09:55 · mlx-examples
10:43 · MLX Community on Hugging Face
13:40 · M1 Pro with MLX?
15:43 · mlx-lm Troubleshoot
26:19 · mlx-lm Solution
31:57 · Lazy Evaluation
34:09 · Indexing Arrays
39:48 · Generative Image Control
40:48 · Instruct Pix2Pix
45:21 · ControlNet Depth
52:47 · LLMs in MLX with Mistral 7B
In Live 109, I used Apple's MLX, an array framework for Apple silicon, for the first time.
Watch this stream to learn how to create a Python environment for MLX, run MLX code in Python, and generate images with Stable Diffusion on Apple silicon, and to understand the role of unified memory in MLX.
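A tiny sketch of those two ideas, lazy evaluation and unified memory, in MLX; the shapes and operations are illustrative.

```python
import mlx.core as mx

a = mx.random.normal((4, 4))
b = mx.random.normal((4, 4))

c = a @ b + 1.0  # builds a computation graph; nothing has run yet
mx.eval(c)       # lazy evaluation: this call forces the computation

# With unified memory, the same arrays are visible to the CPU and the GPU
# without copies; you choose the device per operation via a stream.
d = mx.add(a, b, stream=mx.cpu)
e = mx.add(a, b, stream=mx.gpu)
mx.eval(d, e)
```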
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!
00:15 · Introduction
02:09 · Today
03:55 · Topics
07:18 · Farrago
08:16 · File Organization
10:09 · Descript AI
19:33 · Intro to MLX
22:03 · Installation
23:57 · Python Environment
36:59 · Troubleshooting
45:35 · Break
48:57 · The Issue
50:29 · Python Environment
52:53 · Quick Start
57:39 · Unified Memory
01:11:49 · MLX Samples
01:14:33 · Stable Diffusion with MLX
01:28:16 · Outro
In Live 108, I talked about my new machine, a 14-inch MacBook Pro with the M3 Max Apple silicon chip, 16 cores, 64GB of unified memory, and 1TB of SSD storage. I shared an overview of my streaming and recording setup, going over how I create markers for my videos, and demoed how to run TypeScript and tsx files with Deno and how to compile programs into executables that can run standalone.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!
00:25 · Introduction
02:01 · Today
03:05 · New MacBook Pro M3 Max
11:19 · Streaming Workflow
19:35 · OBS Mask for reMarkable Screen Sharing
36:33 · Streaming Workflow
01:08:39 · Marker Cleanup
01:11:47 · Ideas to Build in 2024
01:14:49 · Deno: Run TypeScript and tsx
01:21:23 · Outro
Hi Friends!
I'm hosting a live conversation with special guests today, Thursday, April 27, at 10:30 AM Pacific Time, to celebrate my 100th YouTube live stream.
I've invited Adam Menges (ex-Lobe.ai), Joel Simon (Artbreeder), Jose Luis Garcia del Castillo (Harvard, ParametricCamp), and Kyle Steinfeld (University of California, Berkeley) to pick their brains on creative machine intelligence and how it's being used in academia and next-generation design tools.
The conversation will take place in Riverside at nono.ma/live/100.
With that link, you'll join as part of the audience and can participate in the chat. There's also an option to "call in" and join the call, which we could use for questions, or even to bring in everyone who wants to join at the end of the call.
Feel free to forward this invite to friends interested in AI & ML.
Thanks so much for being part of my journey.
Warmly,
Nono
In Live 90, we connected via SSH to a Raspberry Pi and took some photos with the Pi Camera, and then trained YOLOv7 on a dataset of hand sketches and detected drawings, text, and arrows from several pages of one of my sketchbooks.
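For the camera part, a minimal sketch of capturing a still over SSH with the legacy picamera package (newer Raspberry Pi OS releases ship picamera2 instead); the resolution and output path are illustrative.

```python
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1280, 720)
camera.start_preview()
sleep(2)  # give the sensor time to adjust exposure
camera.capture('/home/pi/sketchbook-page.jpg')
camera.stop_preview()
camera.close()
```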
You can spread the word by liking and sharing this tweet.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!
In Live 89, we saw an overview of TensorFlow Signatures and did a hands-on demo to implement them as well as to understand Python decorators.
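A minimal sketch of a signature, assuming TensorFlow 2: a method decorated with tf.function and an explicit input_signature, exported as a named SavedModel signature.

```python
import tensorflow as tf

class Scaler(tf.Module):
    # The input_signature fixes the shapes and dtypes this function accepts.
    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
    def scale(self, x):
        return x * 2.0

module = Scaler()

# Because `scale` has an input signature, it can be exported directly.
tf.saved_model.save(module, "/tmp/scaler", signatures={"scale": module.scale})

# Reload and call the named signature (serving signatures take keyword args).
loaded = tf.saved_model.load("/tmp/scaler")
print(loaded.signatures["scale"](x=tf.constant([1.0, 2.0])))
```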
You can spread the word by liking and sharing this tweet.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!
In Live 88, we worked on the Note Parser side project, worked through TipTap's documentation to create custom extensions, and did a live Getting Simple Q&A.
You can spread the word by liking and sharing this tweet.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!
In Live 87, we saw an overview of OpenAI's Image API, which lets you interact with DALL-E 2 programmatically to generate and edit images and request image variations. You can go to OpenAI's API to sign up, read the documentation, and generate API keys.
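A minimal sketch of a generation call with the v0.x-era openai Python client that matched this API at the time; newer client versions use a different interface, and the prompt is illustrative.

```python
import openai

openai.api_key = "sk-..."  # your API key from OpenAI's dashboard

response = openai.Image.create(
    prompt="an axonometric drawing of a tiny house",
    n=1,                 # number of images to generate
    size="1024x1024",
)
print(response["data"][0]["url"])
```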
You can spread the word by liking and sharing this tweet.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!
I recently got access to OpenAI's DALL·E 2 text-to-image beta. In short, this AI system can generate images from text prompts, create semantic local edits by selecting an image region and altering your prompt text, or generate variations of uploaded or generated images.
I've been playing with it and will be doing a live demo today in Live 79 and sharing some of my experiments and thoughts on this tool, which is quite impressive.
You can now propose live stream and podcast topics at topics.nono.ma. Suggest a topic and explain why it would be interesting to cover it on a YouTube live stream or the Getting Simple podcast.
In Live 70, I learned about YouTube's upload Python API, which allows us to programmatically upload high-resolution videos to YouTube from a client application or the command-line interface.
Among other things, we learned how to create a Google Cloud Platform project and create API keys and OAuth 2.0 credentials to authenticate custom applications to different Google APIs. We also saw how to restrict a client ID and secret to a specific IP address during testing and how to scope our application to certain API calls (say, uploading videos to YouTube or changing video thumbnails).
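A minimal sketch of the upload flow with the google-api-python-client and google-auth-oauthlib packages; the file names, title, and privacy status are illustrative.

```python
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Scope the app to uploads only; credentials come from the GCP project.
SCOPES = ["https://www.googleapis.com/auth/youtube.upload"]
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
credentials = flow.run_local_server(port=0)

youtube = build("youtube", "v3", credentials=credentials)
request = youtube.videos().insert(
    part="snippet,status",
    body={
        "snippet": {"title": "Test upload", "description": "Uploaded via the API."},
        "status": {"privacyStatus": "private"},
    },
    media_body=MediaFileUpload("video.mp4", resumable=True),
)
response = request.execute()  # performs the resumable upload
print(response["id"])
```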
Use the timestamps below to jump to specific parts of the stream.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!
In yesterday's live stream, Live 68, we took a look at linear regression following Aurélien Géron's Hands-On Machine Learning book. We saw the normal equation, how to calculate the dot product of two matrices and the inverse of a matrix, and a few different ways to solve a linear regression in code, plus how to do it with Scikit-Learn's LinearRegression class.
You can take a look at the Linear Regression Colab notebook.
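A minimal sketch of both approaches on synthetic data, in the spirit of the book's example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X + rng.standard_normal((100, 1))  # y = 4 + 3x + noise

# Normal equation: theta = (X^T X)^{-1} X^T y
X_b = np.c_[np.ones((100, 1)), X]      # prepend a bias column of ones
theta = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y
print(theta.ravel())                   # close to [4, 3]

# The same fit with Scikit-Learn
lin_reg = LinearRegression().fit(X, y)
print(lin_reg.intercept_, lin_reg.coef_)
```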
Use the timestamps below to jump to specific parts of the stream.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!