NOVEMBER 30, 2022

A new Getting Simple episode with Zach Kron is in the making and will be released soon, including a full video with two camera feeds.

NOVEMBER 29, 2022

Organic content and keyword bidding

If you Google "Where are voice memos located on Mac?" you will likely find my website among the first results.

There are two ways to show up first on Google: with "organic" non-paid content and with paid ads.

Google "organically" indexes publicly available websites and displays search results by relevance, which can get you to Google's top results without paying for an ad.

Search Engine Optimization (SEO) is the field of optimizing websites to perform well on search engines.

Positioning strategies may involve creating content related to your brand and services that answers frequently Googled questions containing common search keywords. A page's title, its web address, its code, and its uniqueness matter—it's not only about the content.

For instance, I gave an answer to the question "What is SEO?" above, which could generate organic views from Google to this page, but it's unlikely, as there are probably hundreds of better-positioned sites that answer that question in more depth.

Google's AI often goes as far as identifying which portion of your writing exactly answers a user's prompt, displaying that portion of text on its search engine, and saving the user a visit to your website as they already got what they wanted.

The shortcut to Google's top hits is keyword bidding, where you compete with others interested in displaying an ad for a similar search query.

It wouldn't make sense, though, to bid for "location of voice memos on Mac" as I'm simply offering information and not marketing products or services for monetization.

Google's algorithm—as well as YouTube's, Instagram's, and TikTok's—is a world of its own, and it can bring your content to thousands or millions of people if used properly.

NOVEMBER 28, 2022

Creating grids with native HTML5 and the "display: flex" CSS property.

See transcript ›

NOVEMBER 27, 2022

I've been testing imaginAIry over the past few days to generate images using Stable Diffusion locally with the Apple Silicon M1 chip.

I see these implementations as a great move, as they require "a CUDA-supported graphics card or M1 processor" and can be run on any of the new MacBooks.

NOVEMBER 25, 2022

In Live 90, we connected via SSH to a Raspberry Pi and took some photos with the Pi Camera. We then trained YOLOv7 on a dataset of hand sketches and detected drawings, text, and arrows across several pages of one of my sketchbooks.

You can spread the word by liking and sharing this tweet.

If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next week!


NOVEMBER 24, 2022

In Live 89, we saw an overview of TensorFlow Signatures and did a hands-on demo to implement them as well as to understand Python decorators.
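As a quick refresher on the decorator part (a generic sketch of my own, not code from the stream), a Python decorator is a function that wraps another function to extend its behavior:

```python
import functools

def log_calls(func):
    """Print a message each time the wrapped function runs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f'Calling {func.__name__} with args {args}')
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b):
    return a + b

print(add(2, 3))
# Calling add with args (2, 3)
# 5
```

This is, conceptually, what TensorFlow's tf.function does: it's a decorator that wraps a plain Python function to trace and compile it.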

You can spread the word by liking and sharing this tweet.

If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next week!


NOVEMBER 22, 2022

How long will it take?

A thirty-minute run takes thirty minutes.

You can watch a two-hour movie in two hours and listen to a twenty-minute podcast in twenty minutes.

But how long will it take you to write a two-hundred-word essay?

Somewhere from a few minutes to several hours.

It's hard to say because creative work doesn't fit into discrete units of time.

Writing, for instance, requires the right head space and time.

It may take several hours to write a piece in ten minutes.

NOVEMBER 21, 2022

How to encode an image dataset to reduce its dimensionality and visualize it in the 2D space.

See transcript ›
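As a rough illustration of the idea (a minimal NumPy-only sketch of my own; the video may use a different encoder, such as an autoencoder), you can flatten each image into a vector and project the dataset onto its first two principal components:

```python
import numpy as np

# A stand-in "image dataset": 20 random grayscale 8x8 images
rng = np.random.default_rng(42)
images = rng.random((20, 8, 8))

# Flatten each image into a 64-dimensional vector
X = images.reshape(len(images), -1)

# Center the data and compute a PCA via singular value decomposition
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# Encode each image as a 2D point (its coordinates along
# the first two principal components), ready to plot
embedding = X_centered @ Vt[:2].T

print(embedding.shape)
# (20, 2)
```

Each image becomes a 2D point you can scatter-plot, say, with matplotlib.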

NOVEMBER 20, 2022

I started the Getting Simple podcast with the AT2020USBi microphone. After some time, I upgraded to the Shure SM58 and the Zoom H6 recorder, which greatly improved the quality. Then I started using the Shure SM7B (with the CL-1 Cloudlifter) and the Zoom PodTrak P4 recorder. I still carry the Shure SM58 and the H6 when traveling, as they're less bulky and more resistant.

NOVEMBER 19, 2022

My iPhone X screen has been malfunctioning for several months. It would swipe, tap, and type on its own, without any understandable cause. I went ahead and decided to get someone to fix it. They changed the screen, but the replacement screen wasn't working properly; touches would do nothing even when everything was rendering. The repair technician, with my phone half open, told me he had to put my phone back as it was, even taking out the new battery he had just installed and putting back my old one. The result is that my old screen now works normally, which may mean the screen wasn't broken but misplaced. It may have shifted slightly when the phone fell, and simply disassembling the phone and putting it back together may have fixed those issues.

The swipes and taps would drain the battery as this would happen even when the phone was locked. I'm not sure if you can imagine how bad the experience of using this phone was. What kept me using this phone was that this wasn't a persistent error but an issue that would only express itself at random. I was never able to figure out what was causing the issue.

I'm supposed to go back to replace my battery and screen when a new replacement piece is available at the store. But now I think I'm good with my iPhone as it is.

NOVEMBER 18, 2022

Here's a way to encode a Laravel site request as JSON to log it via Laravel's logging mechanism, using the Log class from the illuminate/support package.¹

// Log parameters in a get request
Route::get('a-view', function(Request $request) {
  // Encode the request parameters as JSON and log them
  Log::info(json_encode($request->all()));
  return view('your.view');
});

// Log parameters in a get request and redirect
Route::get('redirect', function(Request $request) {
  Log::info(json_encode($request->all()));
  return redirect('/some/page');
});

¹ The service provider of Laravel's Log class is Illuminate\Support\Facades\Log.

NOVEMBER 17, 2022

"Oops! Something went wrong."

Yesterday, I came across an error while trying to ScreenShare with my reMarkable 2 tablet.

The device continuously lost connection to the screen-sharing session.

I restarted the reMarkable desktop app as well as my computer, but the error persisted.

I went to the app and logged out from my reMarkable account and, when I tried to log back in, I found the following error while trying to get a one-time code from reMarkable's website.

There could be a misconfiguration in the system or a service outage. We track these errors automatically, but if the problem persists feel free to contact us. Please try again.

I think the only error going on was that I clicked the "Get a one-time code" button three times in a row, thereby invalidating the authorization tokens of my former requests.

I let it sit for a bit and then clicked the button once more. I could now log in, obtain an access code, and log the desktop app back into my account.

But the ScreenShare error persisted.


After a few minutes, the reMarkable 2 tablet displayed a message in its bottom-right corner saying an update had been downloaded and was ready to be installed.

I went ahead, updated the device, and screen-shared once again.

The issue was solved. I could screen share.

NOVEMBER 16, 2022

Here's how to translate 3d points in Python using a translation matrix.

To translate a series of points in three-dimensional Cartesian space (x, y, z), you first need to "homogenize" the points by adding a value to their projective dimension—which we'll set to one to maintain each point's original coordinates. Then you multiply the point cloud, with NumPy's np.matmul method, by a transformation matrix constructed from a (4, 4) identity matrix with three translation parameters in its bottom row (tx, ty, tz).
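In other words, multiplying a homogenized row vector by this matrix adds each translation parameter to the corresponding coordinate:

$$
\begin{bmatrix} x & y & z & 1 \end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
t_x & t_y & t_z & 1
\end{bmatrix}
=
\begin{bmatrix} x + t_x & y + t_y & z + t_z & 1 \end{bmatrix}
$$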


Here's a breakdown of the steps.

  • Import the NumPy Python library
  • Define a point cloud with Cartesian coordinates (x, y, z)
  • Convert the points to homogeneous coordinates (x, y, z, w)
  • Define our translation parameters (tx, ty, tz)
  • Construct the translation matrix
  • Multiply the homogenized point cloud by the transformation matrix with NumPy's np.matmul


import numpy as np

# Define a set of Cartesian (x, y, z) points
point_cloud = [
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
    [1, 2, 3],
]

# Convert to homogeneous coordinates (x, y, z, w)
point_cloud_homogeneous = []
for point in point_cloud:
    point_homogeneous = point.copy()
    point_homogeneous.append(1)
    point_cloud_homogeneous.append(point_homogeneous)

# Define the translation
tx = 2
ty = 10
tz = 100

# Construct the translation matrix
translation_matrix = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [tx, ty, tz, 1],
]

# Apply the transformation to our point cloud
translated_points = np.matmul(
    point_cloud_homogeneous,
    translation_matrix,
)

# Convert back to Cartesian coordinates
translated_points_xyz = []
for point in translated_points:
    point = np.array(point[:-1])
    translated_points_xyz.append(point)

# Map original to translated point coordinates
# (x0, y0, z0) → (x1, y1, z1)
for i in range(len(point_cloud)):
    point = point_cloud[i]
    translated_point = translated_points_xyz[i]
    print(f'{point} → {list(translated_point)}')

NOVEMBER 15, 2022

Tools: 05 Micron fiber pen

We spent the weekend away from the city—olive trees, sun, blue sky, family, a fireplace—and I brought my sketching tools.

Pigma Micron fiber pens have been my go-to for quite a long time, together with the hardbound, 150-gram, white-paper Alpha Series (22.9 x 15.2 cm, 9 x 6 in) sketchbook from Stillman & Birn—slightly bigger than an A5 sheet—and the White Nights watercolors.

The thin tip of 005 Microns provides a 0.20 mm line thickness that allows for careful detail and line work, and thicker 03 Microns (0.35 mm) work great for infills and outlines.

This weekend I used 05 pens with an even thicker 0.45-mm line to portray faces, which resulted in faster sketches with fewer strokes that felt more expressive. Plus they run better on coarse paper.

It pays off to gain control of your creative medium, settle on a fixed set of tools, and focus on the act of doing.

It's only then that slight tool changes like this one can lead to significantly different results.

NOVEMBER 12, 2022

If you try to serialize a NumPy array to JSON in Python, you'll get the error below.

TypeError: Object of type ndarray is not JSON serializable

Luckily, NumPy has a built-in method to convert one- or multi-dimensional arrays to lists, which are in turn JSON serializable.

import numpy as np
import json

# Define your NumPy array
arr = np.array([[100,200],[300,400]])

# Convert the array to a list
arr_as_list = arr.tolist()

# Serialize the list as JSON
arr_as_json = json.dumps(arr_as_list)
# '[[100, 200], [300, 400]]'

NOVEMBER 11, 2022

Here's the error I was getting when trying to return a NumPy ndarray in the response body of an AWS Lambda function.

Object of type ndarray is not JSON serializable

Reproduce the error

import numpy as np
import json

# A NumPy array
arr = np.array([[1,2,3],[4,5,6]])

# Try to serialize the array
json.dumps(arr)
# TypeError: Object of type ndarray is not JSON serializable


NumPy arrays provide a built-in method to convert them to lists called .tolist().

import numpy as np
import json

# A NumPy array
arr = np.array([[1,2,3],[4,5,6.78]])

# Convert the NumPy array to a list
arr_as_list = arr.tolist()

# Serialize the list as JSON
arr_as_json = json.dumps(arr_as_list)
# '[[1.0, 2.0, 3.0], [4.0, 5.0, 6.78]]'

NOVEMBER 10, 2022

Earlier this week, Amazon AWS announced yet another service release, this time called Resource Explorer.

AWS Resource Explorer [is] a managed capability that simplifies the search and discovery of resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Kinesis streams, and Amazon DynamoDB tables, across AWS Regions in your AWS account. AWS Resource Explorer is available at no additional charge to you.

Start your resource search in the AWS Resource Explorer console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the unified search bar from wherever you are in the AWS Management Console. From the search results displayed in the console, you can go to your resource’s service console and Region with a single step and take action.

To turn on AWS Resource Explorer, see the AWS Resource Explorer console. Read about getting started in our AWS Resource Explorer documentation, or explore the AWS Resource Explorer product page.

NOVEMBER 9, 2022

How to use TensorFlow inside of a Docker container.

See transcript ›

NOVEMBER 8, 2022

Four ways to see the world differently

Change places.
Do something you never do.
Hang out with different people.
Look from a new perspective.

NOVEMBER 7, 2022

How to sort a Vue.js view by different attributes and toggle different view modes.

See transcript ›

NOVEMBER 6, 2022

Today I read What to Blog About Today I Learneds (TILs) by Simon Willison, whose practice was inspired by Josh Branchaud.

Simon Willison read Josh Branchaud's five-year-and-counting collection. Branchaud stores his collection as a GitHub repository which, at the moment of this writing, has 10.8k stars.

NOVEMBER 5, 2022

You can now integrate state-of-the-art image generation capabilities directly into your apps and products through our new DALL·E API. You can get started here.

You own the generations you create with DALL·E.

We’ve simplified our Terms of Use and you now have full ownership rights to the images you create with DALL·E — in addition to the usage rights you’ve already had to use and monetize your creations however you’d like. This update is possible due to improvements to our safety systems which minimize the ability to generate content that violates our content policy.

Sort and showcase with collections.

You can now organize your DALL·E creations in multiple collections. Share them publicly or keep them private. Check out our sea otter collection!

We’re constantly amazed by the innovative ways you use DALL·E and love seeing your creations out in the world. Artists who would like their work to be shared on our Instagram can request to be featured using Instagram’s collab tool. DM us there to show off how you’re using the API!

—The OpenAI Team

Three methods for interacting with images

DALL·E's Images API provides three methods for interacting with images.

  1. Creating images from scratch based on a text prompt
  2. Creating edits of an existing image based on a new text prompt
  3. Creating variations of an existing image

The guide covers the basics of using these three API endpoints with useful code samples.

To see them in action, check the DALL·E preview app.

NOVEMBER 4, 2022

In Live 86, we continued training a decision forest algorithm to classify penguins by species with the tensorflow_decision_forests Python framework on the Palmer Penguins dataset. We saw how to run TensorFlow inside of Docker, along with a few tricks to create and manage containers, and briefly looked at TensorFlow signatures, which are supported by TensorFlow Lite since version 2.7.0 and let us export different named operations in a single model that can be executed in C++, Java, and Python.

Here are links to most of the things we covered.

You can spread the word by liking and sharing this tweet.

If this is something that interests you, please let me know on Twitter or, even better, on the Discord community.

Thanks for watching.

See you next week!


NOVEMBER 3, 2022

Here's how to run TensorFlow inside of a Docker container.

# Start a Docker container
# with an interactive bash session
docker run -it python:3.9-slim bash

# Install TensorFlow and TensorFlow I/O
pip install tensorflow tensorflow-io

# Run TensorFlow in Python
python -c "import tensorflow as tf;\
print(tf.constant(42) / 2 + 2);\
print(tf.constant([1, 2, 3]))"
# tf.Tensor(23.0, shape=(), dtype=float64)
# tf.Tensor([1 2 3], shape=(3,), dtype=int32)

This approach is useful when you don't want to install TensorFlow locally or create a Python environment, or when installing TensorFlow in your local runtime is hard or impossible.

Docker makes it quick to execute TensorFlow commands in Python on any machine running Docker.

If you don't need an interactive session and can define your Python code directly, you can use this one-liner.

docker run -it python:3.9-slim \
bash -c "pip install tensorflow tensorflow-io; \
python -c 'import tensorflow as tf; \
print(tf.constant(42) / 2 + 2); \
print(tf.constant([1, 2, 3]))'"
# tf.Tensor(23.0, shape=(), dtype=float64)
# tf.Tensor([1 2 3], shape=(3,), dtype=int32)

NOVEMBER 2, 2022

You can get tomorrow's date in TypeScript with the Date class.

// Create a date
const tomorrow = new Date()

// Set date to current date plus 1 day
tomorrow.setDate(tomorrow.getDate() + 1)
// 2022-11-03T09:55:29.395Z

You could change that + 1 to the time delta you want to go backward or into the future.

// Create a date for Jan 2, 2020
const aDate = new Date(Date.parse("2020-01-02"))

// Go back in time three days
aDate.setDate(aDate.getDate() - 3)
new Date(aDate)
// 2019-12-30T00:00:00.000Z

// Go back in time three days
aDate.setDate(aDate.getDate() - 3)
new Date(aDate)
// 2019-12-27T00:00:00.000Z

// Go forward in time forty days
aDate.setDate(aDate.getDate() + 40)
new Date(aDate)
// 2020-02-05T00:00:00.000Z

NOVEMBER 1, 2022

Eight podcasting tips

I recently had a conversation with Steve—who wants to build a YouTube channel about the joy of making and listening to music, emphasizing health and well-being—where I shared tips on producing a podcast, building an audience, booking guests, content formats, motivation, goals, and other insights from five years of podcasting.

This episode may be helpful if you're thinking of starting a podcast or YouTube channel or if you want to learn about my podcasting workflow.

Here are the key ideas.

  • Long-form conversations allow getting to know more about guests
  • Building two-way relationships with your audience makes them more likely to stick around
  • Outlines help deliver a clear message
  • Lean recording and publishing workflows make it easier to start
  • Experimentation can keep you motivated
  • Monetization is hard
  • Define your goal and let it evolve with you
  • Evergreen content will always stay current

You can watch this episode and read the notes.

OCTOBER 31, 2022

How to hide un-compiled Vue templates while loading.

See transcript ›

Want to see older publications? Visit the archive.

Listen to Getting Simple.