APRIL 30, 2024

Andy Payne — Grasshopper, Rhino Compute, Teaching, Learning to Code & Gen AI

Hi Friends—

Andy Payne is an architect and software developer at McNeel, the company behind Rhino and Grasshopper 3D.

I met Andy in the summer of 2016. Autodesk had acquired Monolith (a voxel-based editor) from Andy and Pan earlier that year. I joined them as an intern to build a generator of 3D-printed material gradients and play with a Zmorph 3D printer.

We recorded a podcast conversation in New Orleans in September 2022, where I learned about Andy's latest adventure.

Enjoy this episode on the origins of Grasshopper, Grasshopper 2, Rhino.Compute, teaching, learning to code, generative AI, open-source code and monetization, and Andy's journey.

Thanks to everyone who chatted with us during the YouTube premiere.

Let us know your thoughts in the video comments.
Submit your questions at


Nono Martínez Alonso and Andy Payne in New Orleans.


00:00 · Introduction
00:35 · Andy Payne
04:11 · Grasshopper origins
07:23 · Andy meets Grasshopper
09:19 · Grasshopper Primer
10:26 · Grasshopper 1.0
13:22 · Grasshopper 2
15:11 · Developing Grasshopper
16:59 · New data types
18:57 · Rhino Compute & Hops
22:32 · Cloud billing
27:05 · Teaching
30:07 · Visual programming
36:23 · Open source & monetization
42:03 · McNeel Forum
50:07 · Connect with Andy
51:57 · Learning to code
58:00 · Generative AI
01:02:09 · The IKEA effect
01:05:38 · Authorship
01:08:56 · AI trade-offs
01:12:58 · Panagiotis Michalatos
01:16:02 · Advice for young people
01:17:08 · Success
01:18:35 · $100 or less
01:20:12 · Outro

APRIL 26, 2024

I have an Apple M3 Max 14-inch MacBook Pro with 64 GB of Unified Memory (RAM) and 16 cores (12 performance and 4 efficiency).

It's awesome that PyTorch now supports Apple Silicon's Metal Performance Shaders (MPS) backend for GPU acceleration, which makes local inference and training much, much faster. For instance, each denoising step of Stable Diffusion XL takes ~2s with the MPS backend and ~20s on the CPU.
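As a sketch of what that device selection looks like in PyTorch (the tensor here is illustrative; on a non-MPS machine it falls back to the CPU):

```python
import torch

# Use Apple's Metal Performance Shaders (MPS) backend when available,
# falling back to the CPU otherwise.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Tensors and models moved to this device run on the Apple Silicon GPU.
x = torch.randn(4, 4, device=device)
print(f"Running on {device}: {tuple(x.shape)}")
```

The same `device` object can be passed to `model.to(device)` to accelerate inference or training.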

APRIL 24, 2024

In July 2013, Alex Webb asked whether Grasshopper was initially developed as a teaching tool to show how information flowed through commands.

David Rutten denied this.

[Grasshopper] was developed for Rhino customers as a way to automate tasks without the need to write textual code. We expected that some of our users who were interested in RhinoScript or C# or VB.NET would be interested, but we certainly didn't think that it would be taught (at gunpoint apparently in some universities) to the masses.

Originally, the product was called Explicit History1, because it was a different approach to Rhino's native (implicit) history feature. Rhino history is recorded while you model and can then be played back; Grasshopper history is defined from scratch, and the model is created as an afterthought.

I found this while putting together the episode notes for a conversation with Andy Payne on the Getting Simple podcast, where he shares curiosities of Grasshopper's origins and its transition from Explicit History to the initial Grasshopper release, Grasshopper 1, and Grasshopper 2.

  1. In the publication, David Rutten adds that Explicit History was initially called Semantic Modeling, "but that never even made it out of the building." 

APRIL 15, 2024

Pulling a mini-essay and sketch weekly is not an easy feat.

I've been doing it consistently for years, only delaying on a few special occasions for reasons like not having an internet connection, being in a different timezone, or traveling, and other situations that make a good enough excuse to myself.

I'll keep pushing and, as was my initial intention with this project, I'll try to schedule more than one post per week to give myself a bit of slack to develop ideas more deeply and put more thought into them before I hit send.

Still, this project is for me to explore, and I'll continue to publish even if ideas aren't complete. There's always the following week to correct or expand on them.

See you soon!

You can join the newsletter here.

MARCH 25, 2024

I export videos from Descript with embedded subtitles, but Descript doesn't have a way to export subtitles by chapter markers; it only exports them for an entire composition.

Here's a command that extracts the embedded subtitles from a given video—and supports any format supported by FFmpeg, such as MP4, MOV, or MKV.

ffmpeg -i video.mp4 -map 0:s:0 subs.srt

Here's what each part of the command does.

  • -i video.mp4 - the input file.
  • -map 0:s:0 - maps the first subtitle track found in the video. (You can change the last digit to extract a different track, e.g., 0:s:1 for the second subtitle track.)
  • subs.srt - the output file name and format, e.g., SRT or VTT.
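If you're unsure which subtitle tracks a file contains, ffprobe (bundled with FFmpeg) can list them before you extract; video.mp4 stands in for your own file:

```shell
# List subtitle streams: index, codec name, and language tag (if any).
ffprobe -v error -select_streams s \
  -show_entries stream=index,codec_name:stream_tags=language \
  -of csv=p=0 video.mp4
```

Each output row corresponds to one subtitle track you can target with -map.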

If you found this useful, let me know!

MARCH 22, 2024

In Live 116, I conducted a live work session learning how to fine-tune Stable Diffusion models with Low-Rank Adaptation (LoRA).

If this interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next time!


01:07 · Introduction
01:21 · Today
02:19 · Fine-Tune with LoRA
04:09 · Image Diffusion Slides
06:43 · Fine-Tune with LoRA
13:31 · Stable Diffusion & DALL-E
22:27 · Fine-Tuning with LoRA
01:34:20 · Outro

MARCH 15, 2024

Drawing, by Hand & Machine — Berkeley MDes

Hi! It's Nono. Here are links to things I mentioned in my guest lecture at the Creative Machine Learning Innovation Lab at Berkeley MDes, invited by Kyle Steinfeld on March 15, 2024.

🔗 Links

🎙 Podcast Conversations

🗣 Talks

🐦 Nono, Elsewhere

MARCH 14, 2024

In Live 115, we played with tldraw's 'Draw Fast' experiment, which turns freehand scribbles and shapes into realistic images using the Optimized Latent Consistency (Stable Diffusion v1.5) machine learning model through Fal's API.

Thanks to the tldraw team for open-sourcing this experiment. ❤️

If this interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next time!


00:17 · Introduction
02:30 · Today
04:17 · Draw Fast by tldraw
06:15 · Fal AI
07:20 · Hands-On Draw Fast
08:03 · What is Draw Fast?
10:09 · Clone Draw Fast
14:16 · Fal AI
15:04 · Sign Up
16:41 · API Key
20:17 · Pricing
21:55 · DEMO
25:55 · Credits
28:03 · Models
30:57 · DEMO
37:59 · Challenge
41:27 · Break
44:42 · Tldraw React component
49:23 · Draw Fast Code
01:05:50 · Outro

MARCH 13, 2024

I was sad to see a redirect from Lobe's website to their GitHub repository.

Thank you for the support of Lobe! The team has loved seeing what the community built with our application and appreciated your interest and feedback. We wanted to share with you that the Lobe desktop application is no longer under development.

The Lobe team open-sourced a lot of their tooling to use Lobe-trained models on the web, or with Python, .NET, and other platforms. Yet the Lobe app and website were never open-sourced, which means that once they cease to work, they'll be gone for good.

Before it's gone, you can access Lobe's site and download the latest app at

Lobe takes a new humane approach to machine learning by putting your images in the foreground and receding to the background, serving as the main bridge between your ideas and your machine learning model.

Lobe also simplifies the process of machine learning into three easy steps. Collect and label your images. Train and understand your results. Then play with your model and improve it.

I'd invite you to listen to my conversation with Adam Menges on the origins of Lobe.

MARCH 11, 2024

In its policy update of February 21, 2024, PayPal announced that it will exclude NFTs from its Buyer Protection Program and limit Seller Protection for NFT sales to transactions under ten thousand dollars, valued at the time of the transaction.

We are revising PayPal’s Buyer Protection Program to exclude Non-Fungible Tokens (NFTs) from eligibility [and the] Seller Protection Program to exclude from eligibility Non-Fungible Tokens (NFTs) with a transaction amount of $10,000.01 USD or above (or equivalent value in local currency as calculated at the time of the transaction); $10,000.00 USD or below (or equivalent value in local currency as calculated at the time of the transaction), unless the buyer claims it was an Unauthorised Transaction and the transaction meets all other eligibility requirements.

The crypto world seems to be the perfect place for fraudulent and counterfeit transactions, as scammers request money through digital wallets, which are often hard to trace and offer no protection, instead of using traditional banks.

This policy update comes right after Cent NFT blocked NFT sales and the UK authorities seized, for the first time, three NFTs.

FEBRUARY 26, 2024

How to run Google Gemma 2B- and 7B-parameter instruct models locally on the CPU and the GPU on Apple Silicon Macs.

See transcript ›

FEBRUARY 23, 2024

In Live 113, we ran Google's Gemma LLM 2B- and 7B-parameter open models on an Apple Silicon Mac, both on the CPU and the GPU.

We downloaded the Instruct models with the Hugging Face CLI and used PyTorch with Hugging Face's Transformers and Accelerate Python packages to run Gemma locally.
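The download steps can be sketched as follows; google/gemma-2b-it is the 2B instruct model's Hugging Face ID, the local directory name is arbitrary, and access to Gemma requires accepting Google's license on Hugging Face first:

```shell
# Install the Hugging Face CLI.
pip install -U "huggingface_hub[cli]"

# Authenticate with your Hugging Face token (required for gated models
# like Gemma):
#   huggingface-cli login

# Download the Gemma 2B instruct model to a local folder.
huggingface-cli download google/gemma-2b-it --local-dir gemma-2b-it
```

The 7B variant works the same way with its own model ID.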

If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next time!


01:23 · Introduction
02:46 · Previously
03:11 · Today
03:45 · Elgato Prompter
06:19 · Interlude
06:43 · Google Gemma 2B & 7B
08:45 · Overview
11:59 · Hugging Face CLI
14:01 · CLI Install
14:54 · CLI Login
15:33 · Download Gemma
22:19 · Run Gemma Locally
24:49 · Anaconda Environment
29:00 · Gemma on the CPU
52:56 · Apple Silicon GPUs
55:32 · List Torch Silicon MPS Device
56:50 · Gemma on Apple Silicon GPUs
01:08:16 · Sync Samples to Git
01:17:22 · Thumbnail
01:28:42 · Links
01:31:12 · Chapters
01:36:28 · Outro

FEBRUARY 22, 2024

Performance Max campaigns serve across all of Google’s ad inventory, unlocking more opportunities for you to connect with customers.


[Google announced] several new features to help you scale and build high-quality assets — including bringing Gemini models into Performance Max.


Better Ad Strength and more ways to help you create engaging assets.

[A]dvertisers that use asset generation when creating a Performance Max campaign are 63% more likely to publish a campaign with Good or Excellent Ad Strength.

FEBRUARY 16, 2024

In Live 112, we did a hands-on example of how to deploy a web app with Vercel.

We used Yarn Modern (4.1.0) to create, develop, and build a Vite app that uses React, SWC & TypeScript, pushed the app to GitHub, and imported the Git repository into a Vercel deployment, which then re-builds and deploys on every code change.

If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next time!


00:16 · Introduction
01:58 · Previously
02:26 · Today
05:21 · Diffusion Models for Visual Computing
10:07 · LGM
11:21 · Interlude
12:53 · Vite, React & TypeScript Apps with Yarn Modern
17:20 · Create the App
24:29 · Push to Git
29:07 · Deploy to Vercel
33:40 · Edit the App
42:53 · YouTube Channel
45:23 · Draw Fast
46:25 · Markers
47:51 · Elgato Prompter
48:27 · Markers
51:45 · Outro

FEBRUARY 14, 2024

I kept seeing this error when creating a new Yarn Modern project—setting Yarn to the latest version with yarn set version stable—and using the yarn init command.

Usage Error: The nearest package directory doesn't seem part of the project declared in […]

For instance, yarn add -D @types/node wouldn't work.

The fix

There were stray package.json and yarn.lock files in my home directory.

Removing those files fixed the issue.

rm ~/package.json ~/yarn.lock

Create a Yarn Modern project

Then, in any directory, even subfolders of your home directory (~), you can create new Yarn projects.

mkdir app && cd app
yarn init -y
yarn add -D @types/node

Doing It Right

When you run yarn set version stable, Yarn Modern creates a package.json with the packageManager property set to the latest stable version of Yarn, such as 4.1.0.
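For reference, the generated package.json includes something along these lines (the version pin will match whatever stable release Yarn resolves at the time):

```json
{
  "name": "my-app",
  "packageManager": "yarn@4.1.0"
}
```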

To avoid the above issue, you should first create your project.

yarn create vite my-app --template react-swc-ts

Then, enter the app's directory and only then set the desired Yarn version.

cd my-app
yarn set version stable
# Yarn Modern will be used here.

FEBRUARY 12, 2024

After I put my M1 MacBook Pro to sleep for a couple of weeks, it woke up with the wrong date and time, set to around two years before the current date.

Here's what fixed it for me.

Open a Terminal window and run the following command.

sudo sntp -sS time.apple.com

This will trigger a time sync with the actual date and time from Apple's time server. But it won't completely fix the issue: if you close the lid and reopen your laptop, the clock will revert to the wrong time.

After you run the command above, you have to do the following.

  • Open System Settings → General → Date & Time
  • Disable Set time and date automatically
  • Enable Set time and date automatically again

That's it. This permanently fixed the issue for me.

If you found this useful, let me know!

FEBRUARY 8, 2024

You can directly assign new properties to the window object.

(window as any).talk = () => { console.log(`Hello!`) }

You can extend the Window interface with typings and then assign the property values.

declare global {
  interface Window {
    talk: () => void
    concat: (words: string[]) => string
  }
}

window.talk = () => { console.log(`Hello!`) }
window.concat = (words: string[]) => {
  return words.join(`, `)
}

FEBRUARY 2, 2024

In Live 111, I showed a few tools I've recently discovered.

If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next time!


00:11 · Introduction
02:34 · Previously
03:54 · Password Managers
06:45 · Notion
07:57 · Animations with Lottielab
13:33 · Animations with Linearity Move
17:31 · Visual Electric: Generative AI Images
21:32 · Break
23:25 · Visual Electric
26:27 · Future Topics
27:03 · Outro

JANUARY 26, 2024

In an email to Apple Podcasts Connect users, Apple announced today that it will display auto-generated transcripts of podcasts in Apple Podcasts.

To make podcasts accessible to more users, we’re adding transcripts to Apple Podcasts. Listeners will be able to read along while your podcast plays or access a transcript from your episode page.

Apple Podcasts will start to automatically create and display transcripts for your shows, or you can provide your own. Learn more about our newest feature.

This is a great feature which, if Apple's transcripts are accurate enough, may make unnecessary the countless hours of manual transcription or manual editing of AI transcripts needed to ensure accuracy.

I wondered whether Apple would let podcast creators edit transcripts when AI gets parts of them wrong. The answer is yes; transcripts can be provided via a podcast's RSS feed or uploaded to Apple Podcasts.

Apple will automatically generate episode transcripts for your show. You can also provide your own transcripts through RSS or upload. Displaying transcripts will make your podcast more accessible.

Apple lists two options. "Only display auto-generated transcripts by Apple" and "Display transcripts I provide, or auto-generated transcripts by Apple if one isn't provided."

If you choose to provide your own transcripts, we will ingest them using the RSS transcript tag. […] All transcripts are subject to quality standards. Files that do not meet standards will not be displayed.

You can download and edit transcripts in Apple Podcasts Connect, make changes, and link the new file to your RSS feed. For this, Apple has added the new tag <podcast:transcript> to specify "a link to the episode transcript in the Closed Caption format. Apple Podcasts will prefer VTT format over SRT format if multiple instances are included."

You should use this tag when you have a valid transcript file available for users to read. Specify the link to your transcript in the url attribute of the tag. A valid type attribute is also required. Learn more about namespace RSS tags on the GitHub repository.
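A sketch of what that tag looks like inside an episode's RSS item; the URL is a placeholder, and the feed must declare the podcast namespace:

```xml
<item>
  <title>Episode 1</title>
  <!-- Assumes the channel declares:
       xmlns:podcast="https://podcastindex.org/namespace/1.0" -->
  <podcast:transcript
    url="https://example.com/episodes/1/transcript.vtt"
    type="text/vtt" />
</item>
```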

Options for displaying transcripts are available in Apple Podcasts Connect for each show.

I'm looking forward to this amazing feature. It will make the vast catalog of existing podcasts more accessible.

JANUARY 24, 2024

In Live 110, I continued looking at Apple's MLX framework.

Watch this stream to learn how to run MLX code in Python and generate text with Mistral 7B on Apple silicon.

If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next week!


00:17 · Introduction
02:35 · Today
04:35 · Apple MLX
06:40 · mlx
08:24 · mlx-data
09:55 · mlx-examples
10:43 · MLX Community in HuggingFace
13:40 · M1 Pro with MLX?
15:43 · mlx-lm Troubleshoot
26:19 · mlx-lm Solution
31:57 · Lazy Evaluation
34:09 · Indexing Arrays
39:48 · Generative Image Control
40:48 · Instruct Pix2Pix
45:21 · ControlNet Depth
52:47 · LLMs in MLX with Mistral 7B

JANUARY 19, 2024

Yesterday, Cron announced this is "the final chapter of Cron Calendar and the beginning of Notion Calendar."

I've been a heavy Notion user for years to organize my projects, but I needed a good workflow for calendars. Notion Calendar to the rescue; it's a great step toward surfacing tasks and pages from Notion databases in my desktop and mobile calendars. Everything would be complete if my main calendar tool were Google Calendar, but I use Apple Calendar.

The last piece missing for me is displaying Apple Calendars in Notion Calendar or the other way around—showing Notion Calendars in Apple Calendar.

JANUARY 18, 2024

First, you must install Rust on your machine, which comes with Cargo. (Installing Rust with rustup will also install Cargo.)

Create a new package

cargo new my_package

Build and run your package

cd my_package
cargo build
cargo run

Adding dependencies

Edit Cargo.toml and add your package dependencies.

# Cargo.toml
[dependencies]
base64 = "0.21.7"

You can browse Cargo packages at

See how to compile and run Rust programs without Cargo.

JANUARY 17, 2024

Here are two simple Rust programs and how to compile and run them in macOS with rustc.

The simplest Rust program

fn main() {
    println!("Hello, World!");
}

You can compile this program with rustc, assuming the file is named main.rs.

rustc main.rs

Then run it.

./main
# Hello, World!

A program that counts words in files concurrently

I generated this program with ChatGPT and then modified it.

use std::fs::File;
use std::io::{self, BufRead};
use std::path::Path;
use std::thread;

fn count_words_in_file(file_path: &Path) -> io::Result<usize> {
    let file = File::open(file_path)?;
    let reader = io::BufReader::new(file);

    let mut count = 0;
    for line in reader.lines() {
        let line = line?;
        count += line.split_whitespace().count();
    }
    Ok(count)
}

fn main() {
    let file_paths = vec!["file1.txt", "file2.txt", "file3.txt"]; // Replace with actual file paths
    let mut handles = vec![];

    for path in file_paths {
        let path = Path::new(path);
        // Count the words of each file in its own thread.
        let handle = thread::spawn(move || {
            match count_words_in_file(path) {
                Ok(count) => println!("{} has {} words.", path.display(), count),
                Err(e) => eprintln!("Error processing file {}: {}", path.display(), e),
            }
        });
        handles.push(handle);
    }

    // Wait for all threads to finish.
    for handle in handles {
        handle.join().unwrap();
    }
}
We then compile and run as before, assuming we have three text files (file1.txt, file2.txt, and file3.txt) in the same directory as our program.

# file2.txt has 45 words.
# file3.txt has 1324 words.
# file1.txt has 93980 words.

JANUARY 12, 2024

I got a Permission denied error when trying to cargo install.

› cargo install cargo-wasm
    Updating index
  Downloaded cargo-wasm v0.4.1
error: failed to download replaced source registry `crates-io`

Caused by:
  failed to create directory `/Users/nono/.cargo/registry/cache/`

Caused by:
  Permission denied (os error 13)

This is how I fixed it.

sudo chown -R $(whoami) /Users/nono/.cargo

I then tried to cargo wasm setup and got this error.

› cargo wasm setup
info: syncing channel updates for 'nightly-aarch64-apple-darwin'
error: could not create temp file /Users/nono/.rustup/tmp/628escjb9fzjn4mu_file: Permission denied (os error 13)
Failed to run rustup. Exit code was: 1

Again, this was solved by changing the owner of ~/.rustup to myself.

sudo chown -R $(whoami) /Users/nono/.rustup

DECEMBER 22, 2023

In Live 109, I used Apple's MLX for the first time—an array framework for Apple Silicon.

Watch this stream to learn how to create a Python environment for MLX, run MLX code in Python, the role of unified memory in MLX, and generate images with Stable Diffusion on Apple silicon.

If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next week!


00:15 · Introduction
02:09 · Today
03:55 · Topics
07:18 · Farrago
08:16 · File Organization
10:09 · Descript AI
19:33 · Intro to MLX
22:03 · Installation
23:57 · Python Environment
36:59 · Troubleshooting
45:35 · Break
48:57 · The Issue
50:29 · Python Environment
52:53 · Quick Start
57:39 · Unified Memory
01:11:49 · MLX Samples
01:14:33 · Stable Diffusion with MLX
01:28:16 · Outro

DECEMBER 21, 2023

In Live 108, I talked about my new machine, a 14-inch MacBook Pro with the M3 Max Apple silicon chip, 16 cores, 64 GB of unified memory, and 1 TB of SSD storage; shared an overview of my streaming and recording setup, going over how I create markers for my videos; and demoed how to run TypeScript and tsx files with Deno and how to compile programs into executables that can run standalone.

If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next week!


00:25 · Introduction
02:01 · Today
03:05 · New MacBook Pro M3 Max
11:19 · Streaming Workflow
19:35 · OBS Mask for reMarkable Sharing Screen
36:33 · Streaming Workflow
01:08:39 · Marker Cleanup
01:11:47 · Ideas to Build in 2024
01:14:49 · Deno: Run TypeScript and tsx
01:21:23 · Outro

DECEMBER 20, 2023

I was curious about the point of Deno, a JavaScript and TypeScript runtime with similarities to Node.js. Both are built on the V8 JavaScript engine, but Deno is built in Rust; Node is built using C++.

Deno is built for safety and has TypeScript support—there's no need to transpile files to JavaScript.

Two features I love are that you can execute TypeScript directly and that programs can be compiled into standalone binaries with deno compile. Permissions can be specified (say, using --allow-read) on program execution, or be included in the standalone bundle at compile time.

Here's a sample program that watches a directory for file changes, returning create, modify, and remove events.

// watcher.ts
const watcher = Deno.watchFs("/Users/nono/Desktop/files/");

for await (const event of watcher) {
    console.log(`Change detected:`, event);
}

This program can be run with deno run.

deno run watcher.ts

The program will request permission to read /Users/nono/Desktop/files, which you can grant with a flag at execution time to avoid the prompt.

deno run --allow-read=/Users/nono/Desktop/files watcher.ts

We can compile the program as a standalone executable bundle.

deno compile --output=my-watcher watcher.ts

The binary can be executed and will ask for the same permissions.

./my-watcher
Permissions can be included at compilation time as well.

deno compile \
--allow-read=/Users/nono/Desktop/nono \
--output=my-watcher watcher.ts

Now, our executable has the required permissions. No more, no less.

DECEMBER 18, 2023

It seems like Google Calendar's Appointment Schedule feature is going to take a fair share of Calendly's market. It's free and easy to set up—convenient. It's been around for a few months, but I only noticed it today.

I'm a paid Calendly subscriber, and this makes me wonder if the yearly fee is worth paying. It depends on my usage and whether Google's appointment scheduler is a good replacement.

DECEMBER 14, 2023

  • Log into your Gemini account.
  • Go to Account → Earn → Voting Materials.
  • Copy your E-Ballot ID.
  • Access the online balloting platform run by Kroll, Genesis' soliciting agent.
  • Paste your E-Ballot ID.

You can update your vote multiple times before the Jan 10, 2024 (4 pm) deadline and, as listed in the voting instructions, "the last properly completed Ballot timely submitted will supersede and revoke any previously received Ballot."1

  1. If multiple Ballots are electronically submitted by a single Holder with respect to the same Claim prior to the Voting Deadline, the last properly completed Ballot timely submitted will supersede and revoke any previously received Ballot. 

DECEMBER 13, 2023

I received the following email from Dropbox notifying me that Dropbox for macOS on File Provider was ready.

We’re writing to let you know that Dropbox for macOS on File Provider is ready. This updated version of the Dropbox app has a deep integration with macOS to ensure you have the best Dropbox experience.

To get started, ensure your computer is on the latest version of macOS and click the Dropbox icon in your menu bar. On the notification that appears, click Get started.

File Provider is a macOS API in the form of "an extension other apps use to access files and folders managed by your app and synced with a remote storage."

I'm on macOS Sonoma, and so far, the transition has been smooth. It seems like macOS indexed my entire Dropbox library, which is north of 500,000 files.

Offline file loading on double-click has improved, and it seems third-party apps that read configuration files from Dropbox aren't having any issues doing so.

Other than that, the overall experience feels the same.

This move was forced by Apple and is meant to improve how Dropbox and other cloud file providers work.

Let's see how it goes.

Want to see older publications? Visit the archive.

Listen to Getting Simple.