The issue is that, by default, Laravel ships with DB_HOST set to 127.0.0.1, but MySQL 9 will reject that host in favor of localhost.
If everything else is configured correctly, simply set your DB_HOST to localhost, and you should be good to go.
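Assuming a standard Laravel setup, that's a one-line change in the project's .env file; you may also need to clear Laravel's configuration cache for it to take effect.
# .env
DB_HOST=localhost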
macOS Sequoia introduces new features to help you be more productive and creative on Mac. With the latest Continuity feature, iPhone Mirroring, you can access your entire iPhone on Mac. It’s easy to tile windows to quickly create your ideal workspace, and you can even see what you’re about to share while presenting with Presenter Overlay. A big update to Safari features Distraction Control, Highlights, and a redesigned Reader, making it easy to get things done while you browse the web. macOS Sequoia also brings text effects and emoji Tapbacks to Messages, Math Notes to Calculator, the ability to plan a hike in Maps, and so much more.
iPhone Mirroring sounds extremely useful.
Hits the Upgrade Now button.
reMarkable released the reMarkable Paper Pro, a brand-new device with an 11.8” color display. The latency is down to 12 milliseconds from 21 milliseconds in the older reMarkable 2, which only featured black, white, and grayscale colors. Storage is up to 64 GB from 8 GB.
It's a bit pricier than the previous model.
If you, like me, have the reMarkable 2, I don't think it is worth an upgrade. But I'll have to get my hands on one to know for sure.
I enjoyed learning about Disney's sodium vapor background removal process, which is used in movies such as Mary Poppins (1964), Bedknobs and Broomsticks (1971), and Pete’s Dragon (1977).
This method works much better than green and blue chroma keys, but as the video experiment shows, it's much more challenging to achieve.
This site with machine learning challenges (deep-ml.com) looks really promising for learning foundational concepts.
I recently bought an Atomos Ninja with the Atomos Connect module and an UltraSync Blue to pair devices via Bluetooth.
I couldn't find an updated list of compatible devices, so I started one.
Timecode Systems, the original creator of the UltraSync One and Blue, was acquired by Atomos, which makes monitor recorders such as the Atomos Ninja or Shogun. The UltraSync Blue is also compatible with all other Timecode Systems devices.
Hi Friends—
I'm working to bring you new episodes with John Pierson, Joel Simon, David Andrés León, and other exciting guests.
Today's episode is a follow-up with Andy Payne on Grasshopper 2's new features, recorded live after Andy's episode was released.
Thanks to everyone who chatted with us during the YouTube premiere.
Let us know your thoughts in the video comments.
Submit your questions at gettingsimple.com/ask.
Warmly,
Nono
00:00 · Introduction
00:50 · Grasshopper 2
03:03 · Data types
04:44 · Content Cache component
06:35 · Rhino Compute
07:37 · Object attributes
08:36 · New features
08:51 · Shouts
09:50 · Visual diffing and graphics
10:24 · Figurines
11:33 · Installing Grasshopper 2
12:32 · Andy's day-to-day
13:39 · 3D tools
Hey, you don’t get to decide what spreads—the public does.
—Seth Godin, All Marketers Are Liars
Hi Friends—
Andy Payne is an architect and software developer at McNeel, the company behind Rhino and Grasshopper 3D.
I met Andy in the summer of 2016. Autodesk had acquired Monolith (a voxel-based editor) from Andy and Pan earlier that year. I joined them as an intern to build a generator of 3D-printed material gradients and play with a Zmorph 3D printer.
We recorded a podcast conversation in New Orleans in September 2022, where I learned about Andy's latest adventure.
Enjoy this episode on the origins of Grasshopper, Grasshopper 2, Rhino.Compute, teaching, learning to code, generative AI, open-source code and monetization, and Andy's journey.
Thanks to everyone who chatted with us during the YouTube premiere.
Let us know your thoughts in the video comments.
Submit your questions at gettingsimple.com/ask.
Warmly,
Nono
00:00 · Introduction
00:35 · Andy Payne
04:11 · Grasshopper origins
07:23 · Andy meets Grasshopper
09:19 · Grasshopper Primer
10:26 · Grasshopper 1.0
13:22 · Grasshopper 2
15:11 · Developing Grasshopper
16:59 · New data types
18:57 · Rhino Compute & Hops
22:32 · Cloud billing
27:05 · Teaching
30:07 · Visual programming
36:23 · Open source & monetization
42:03 · McNeel Forum
50:07 · Connect with Andy
51:57 · Learning to code
58:00 · Generative AI
01:02:09 · The IKEA effect
01:05:38 · Authorship
01:08:56 · AI trade-offs
01:12:58 · Panagiotis Michalatos
01:16:02 · Advice for young people
01:17:08 · Success
01:18:35 · $100 or less
01:20:12 · Outro
I have an Apple M3 Max 14-inch MacBook Pro with 64 GB of Unified Memory (RAM) and 16 cores (12 performance and 4 efficiency).
It's awesome that PyTorch now supports Apple Silicon's Metal Performance Shaders (MPS) backend for GPU acceleration, which makes local inference and training much, much faster. For instance, each denoising step of Stable Diffusion XL takes ~2s with the MPS backend and ~20s on the CPU.
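As a minimal sketch, assuming a recent PyTorch build with MPS support, you can check that the backend is available and allocate tensors on it:

import torch

# Prefer Apple's Metal Performance Shaders (MPS) backend when it's available.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

x = torch.rand(1024, 1024, device=device)  # allocated on the GPU (or the CPU fallback)
print(device, x.mean().item())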
In July 2013, Alex Webb asked whether Grasshopper was initially developed as a teaching tool to show how information flowed through commands.
David Rutten denied this.
[Grasshopper] was developed for Rhino customers as a way to automate tasks without the need to write textual code. We expected that some of our users who were interested in RhinoScript or C# or VB.NET would be interested, but we certainly didn't think that it would be taught (at gunpoint apparently in some universities) to the masses.
Originally, the product was called Explicit History¹ because it was a different approach to Rhino's native (implicit) history feature. Rhino history is recorded while you model and can then be played back; Grasshopper history is defined from scratch, while the model is created as an afterthought.
I found this while putting together the episode notes for a conversation with Andy Payne on the Getting Simple podcast, where he shares curiosities of Grasshopper's origins and its transition from Explicit History to the initial Grasshopper release, Grasshopper 1, and Grasshopper 2.
¹ In the publication, David Rutten adds that Explicit History was initially called Semantic Modeling, "but that never even made it out of the building."
Pulling off a mini-essay and a sketch every week is not an easy feat.
I've been doing it consistently for years, only delaying on a few special occasions, for reasons like not having an internet connection, being in a different time zone, traveling, and other situations that felt like a good enough excuse to myself.
I'll keep pushing, and, as was my initial intention with this project, I'll try to schedule more than one post per week to give myself a bit of slack to develop ideas more deeply and put more thought into them before I hit send.
Still, this project is for me to explore, and I'll continue to publish even if ideas aren't complete. There's always the following week to correct or expand on them.
See you soon!
You can join the newsletter here.
I export videos with embedded subtitles from Descript, but Descript doesn't have a way to export subtitles per chapter marker; it only exports them for an entire composition.
Here's a command that extracts the embedded subtitles from a given video. It works with any format FFmpeg supports, such as MP4, MOV, or MKV.
ffmpeg -i video.mp4 -map 0:s:0 subtitles.srt
Here's what each part of the command does.
-i video.mp4: the input file.
-map 0:s:0: selects the first subtitle track found in the video. (You can change the last digit to extract a different track, e.g., 0:s:1 for the second subtitle track.)
subtitles.srt: the output file name and format, e.g., SRT or VTT.
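For example, to extract a hypothetical second subtitle track and save it as WebVTT instead, the same pattern applies:
ffmpeg -i video.mp4 -map 0:s:1 subtitles.vtt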
If you found this useful, let me know!
In Live 116, I conducted a live work session learning how to fine-tune Stable Diffusion models with Low-Rank Adaptation (LoRA).
If this interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
01:07 · Introduction
01:21 · Today
02:19 · Fine-Tune with LoRA
04:09 · Image Diffusion Slides
06:43 · Fine-Tune with LoRA
13:31 · Stable Diffusion & DALL-E
22:27 · Fine-Tuning with LoRA
01:34:20 · Outro
Hi! It's Nono. Here are links to things I mentioned in my guest lecture at the Creative Machine Learning Innovation Lab at Berkeley MDes, invited by Kyle Steinfeld on March 15, 2024.
In Live 115, we played with tldraw's 'Draw Fast' experiment, which turns freehand scribbles and shapes into realistic images using the Optimized Latent Consistency (Stable Diffusion v1.5) machine learning model through fal.ai's API.
Thanks to the tldraw team for open-sourcing this experiment. ❤️
If this interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
00:17 · Introduction
02:30 · Today
04:17 · Draw Fast by tldraw
06:15 · Fal AI
07:20 · Hands-On Draw Fast
08:03 · What is Draw Fast?
10:09 · Clone Draw Fast
14:16 · Fal AI
15:04 · Sign Up
16:41 · API Key
20:17 · Pricing
21:55 · DEMO
25:55 · Credits
28:03 · Models
30:57 · DEMO
37:59 · Challenge
41:27 · Break
44:42 · Tldraw React component
49:23 · Draw Fast Code
01:05:50 · Outro
I was sad to see a redirect from Lobe.ai to Lobe's GitHub repository.
Thank you for the support of Lobe! The team has loved seeing what the community built with our application and appreciated your interest and feedback. We wanted to share with you that the Lobe desktop application is no longer under development.
The Lobe team open-sourced a lot of their tooling to use Lobe-trained models on the web or with Python, .NET, and other platforms. Yet the Lobe app and website were never open-sourced, which means there will be no way to keep using them once they stop working.
Before it's gone, you can access Lobe's site and download the latest app at aka.ms/DownloadLobe.
Lobe takes a new humane approach to machine learning by putting your images in the foreground and receding to the background, serving as the main bridge between your ideas and your machine learning model.
Lobe also simplifies the process of machine learning into three easy steps. Collect and label your images. Train and understand your results. Then play with your model and improve it.
I'd invite you to listen to my conversation with Adam Menges on the origins of Lobe.
In its policy update of February 21, 2024, PayPal announced that it will exclude NFTs from eligibility for its Buyer Protection Program and remove Seller Protection for NFT sales above $10,000, valued at the time of the transaction.
We are revising PayPal’s Buyer Protection Program to exclude Non-Fungible Tokens (NFTs) from eligibility [and the] Seller Protection Program to exclude from eligibility Non-Fungible Tokens (NFTs) with a transaction amount of $10,000.01 USD or above (or equivalent value in local currency as calculated at the time of the transaction); $10,000.00 USD or below (or equivalent value in local currency as calculated at the time of the transaction), unless the buyer claims it was an Unauthorised Transaction and the transaction meets all other eligibility requirements.
The crypto world seems to be the perfect place for fraudulent and counterfeit transactions, as scammers request money through digital wallets, which are often hard to trace and have no protection, instead of using traditional banks.
This policy update comes right after Cent NFT blocked NFT sales and the UK authorities seized, for the first time, three NFTs.
How to run Google Gemma 2B- and 7B-parameter instruct models locally on the CPU and the GPU on Apple Silicon Macs.
In Live 113, we ran Google's Gemma LLM 2B- and 7B-parameter open models on an Apple Silicon Mac, both on the CPU and the GPU.
We downloaded the Instruct models with the Hugging Face CLI and used PyTorch with Hugging Face's Transformers and Accelerate Python packages to run Gemma locally.
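As a rough sketch of that flow (assuming the torch, transformers, and accelerate packages are installed, you've accepted Gemma's license on Hugging Face, and you're logged in via the CLI; the model ID, prompt, and settings below are illustrative):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model ID: the 2B Instruct checkpoint; swap in "google/gemma-7b-it" for the 7B model.
model_id = "google/gemma-2b-it"

# Run on Apple Silicon's GPU via the MPS backend when available; otherwise fall back to the CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)

# For brevity, this skips the chat template the Instruct models expect.
inputs = tokenizer("Write a haiku about the ocean.", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))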
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
01:23 · Introduction
02:46 · Previously
03:11 · Today
03:45 · Elgato Prompter
06:19 · Interlude
06:43 · Google Gemma 2B & 7B
08:45 · Overview
11:59 · Hugging Face CLI
14:01 · CLI Install
14:54 · CLI Login
15:33 · Download Gemma
22:19 · Run Gemma Locally
24:49 · Anaconda Environment
29:00 · Gemma on the CPU
52:56 · Apple Silicon GPUs
55:32 · List Torch Silicon MPS Device
56:50 · Gemma on Apple Silicon GPUs
01:08:16 · Sync Samples to Git
01:17:22 · Thumbnail
01:28:42 · Links
01:31:12 · Chapters
01:36:28 · Outro
Performance Max campaigns serve across all of Google’s ad inventory, unlocking more opportunities for you to connect with customers.
[…]
[Google announced] several new features to help you scale and build high-quality assets — including bringing Gemini models into Performance Max.
[…]
Better Ad Strength and more ways to help you create engaging assets.
[A]dvertisers that use asset generation when creating a Performance Max campaign are 63% more likely to publish a campaign with Good or Excellent Ad Strength.
In Live 112, we did a hands-on example of how to deploy a web app with Vercel.
We used Yarn Modern (4.1.0) to create, develop, and build a Vite app that uses React, SWC, and TypeScript, pushed the app to GitHub, and imported the Git repository into a Vercel deployment, which then rebuilds and redeploys on every code change.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
00:16 · Introduction
01:58 · Previously
02:26 · Today
05:21 · Diffusion Models for Visual Computing
10:07 · LGM
11:21 · Interlude
12:53 · Vite, React & TypeScript Apps with Yarn Modern
17:20 · Create the App
24:29 · Push to Git
29:07 · Deploy to Vercel
33:40 · Edit the App
42:53 · YouTube Channel
45:23 · Draw Fast
46:25 · Markers
47:51 · Elgato Prompter
48:27 · Markers
51:45 · Outro
I kept seeing this error when creating a new Yarn Modern project, after setting Yarn to the latest version with yarn set version stable and using the yarn init command.
Usage Error: The nearest package directory doesn't seem part of the project declared in […]
For instance, yarn add -D @types/node wouldn't work.
There were a package.json and a yarn.lock file in my home directory.
Removing those files fixed the issue.
rm ~/package.json ~/yarn.lock
Then, in any directory, even in subfolders of your home directory (~), you can create new Yarn projects.
mkdir app && cd app
yarn init -y
yarn add -D @types/node
When you run yarn set version stable, Yarn Modern creates a package.json with the packageManager property set to the latest stable version of Yarn, such as 4.1.0.
To avoid the above issue, you should first create your project.
yarn create vite my-app --template react-swc-ts
Then, enter the app's directory and only then set the desired Yarn version.
cd my-app
yarn set version stable
yarn
# Yarn Modern will be used here.
After I put my M1 MacBook Pro to sleep for a couple of weeks, it woke up with the wrong date and time, set to around two years before the current date.
Here's what fixed it for me.
Open a Terminal window and run the following command.
sudo sntp -sS time.apple.com
This will trigger a time sync with the actual date and time from Apple's time servers. But it won't completely fix the issue: if you close the lid and reopen your laptop, the clock will revert to the wrong time.
After you run the command above, you have to do the following.
That's it. This permanently fixed the issue for me.
If you found this useful, let me know!
You can directly assign new properties to the window object by casting it to any, which bypasses type checking.
(window as any).talk = () => { console.log(`Hello!`) }
Alternatively, you can extend the Window interface with typings and then assign the property values.
// Augment the global Window interface with the new properties.
declare global {
  interface Window {
    talk: () => void
    concat: (words: string[]) => string
  }
}

window.talk = () => { console.log(`Hello!`) }
window.concat = (words: string[]) => {
  return words.join(`, `)
}

// An import or export makes this file a module, which `declare global` requires.
export {}
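For example, both calls now type-check without casting to any:

window.talk()                   // logs "Hello!"
window.concat([`Hi`, `there`])  // returns "Hi, there"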
In Live 111, I showed a few tools I've recently discovered.
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next time!
00:11 · Introduction
02:34 · Previously
03:54 · Password Managers
06:45 · Notion
07:57 · Animations with Lottielab
13:33 · Animations with Linearity Move
17:31 · Visual Electric: Generative AI Images
21:32 · Break
23:25 · Visual Electric
26:27 · Future Topics
27:03 · Outro
In an email to Apple Podcasts Connect users, Apple announced today that it will display podcast transcripts in Apple Podcasts, generated automatically by Apple.
To make podcasts accessible to more users, we’re adding transcripts to Apple Podcasts. Listeners will be able to read along while your podcast plays or access a transcript from your episode page.
Apple Podcasts will start to automatically create and display transcripts for your shows, or you can provide your own. Learn more about our newest feature.
This is a great feature that, if Apple's transcripts are accurate enough, could save the countless hours podcasters spend on manual transcription or on editing AI transcripts to make sure they are accurate.
I wondered whether Apple would allow podcast creator edits when AI gets parts of the transcripts wrong. The answer is yes; transcripts can be provided via podcasts' RSS feeds or uploaded to Apple Podcasts.
Apple will automatically generate episode transcripts for your show. You can also provide your own transcripts through RSS or upload. Displaying transcripts will make your podcast more accessible.
Apple lists two options. "Only display auto-generated transcripts by Apple" and "Display transcripts I provide, or auto-generated transcripts by Apple if one isn't provided."
If you choose to provide your own transcripts, we will ingest them using the RSS transcript tag. […] All transcripts are subject to quality standards. Files that do not meet standards will not be displayed.
You can download and edit transcripts in Apple Podcasts Connect, make changes, and link the new file to your RSS feed.
For this, Apple has added support for the <podcast:transcript> tag to specify "a link to the episode transcript in the Closed Caption format. Apple Podcasts will prefer VTT format over SRT format if multiple instances are included."
You should use this tag when you have a valid transcript file available for users to read. Specify the link to your transcript in the url attribute of the tag. A valid type attribute is also required. Learn more about namespace RSS tags on the GitHub repository. Options for displaying transcripts are available in Apple Podcasts Connect for each show.
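For illustration, with a placeholder URL, an episode item in your RSS feed could reference a WebVTT transcript like this (the type attribute is assumed to be the file's MIME type, text/vtt for WebVTT):
<podcast:transcript url="https://example.com/episodes/42/transcript.vtt" type="text/vtt" />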
I'm looking forward to this amazing feature. It will make the vast catalog of existing podcasts more accessible.
In Live 110, I continued looking at Apple's MLX framework.
Watch this stream to learn how to run MLX code in Python and generate text with Mistral 7B on Apple silicon.
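As a rough sketch of the idea (assuming the mlx-lm package is installed; the checkpoint name and prompt below are illustrative, not the exact ones from the stream):

# pip install mlx-lm
from mlx_lm import load, generate

# Illustrative checkpoint: a 4-bit Mistral 7B conversion from the mlx-community organization.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

print(generate(model, tokenizer, prompt="Explain lazy evaluation in one sentence."))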
If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.
Thanks for watching.
See you next week!
00:17 · Introduction
02:35 · Today
04:35 · Apple MLX
06:40 · mlx
08:24 · mlx-data
09:55 · mlx-examples
10:43 · MLX Community in HuggingFace
13:40 · M1 Pro with MLX?
15:43 · mlx-lm Troubleshoot
26:19 · mlx-lm Solution
31:57 · Lazy Evaluation
34:09 · Indexing Arrays
39:48 · Generative Image Control
40:48 · Instruct Pix2Pix
45:21 · ControlNet Depth
52:47 · LLMs in MLX with Mistral 7B
Yesterday, Cron announced this is "the final chapter of Cron Calendar and the beginning of Notion Calendar."
I've been a heavy Notion user for years to organize my projects, but I needed a good workflow for calendars. Notion Calendar comes to the rescue; it's a great step toward surfacing tasks and pages from Notion databases in my desktop and mobile calendars. Everything would be complete if my main calendar tool were Google Calendar, but I use Apple Calendar.
The last piece missing for me is displaying Apple Calendars in Notion Calendar or the other way around—showing Notion Calendars in Apple Calendar.