MARCH 29, 2023

Here's a one-liner to turn any website into dark mode.

body, img { filter: invert(0.92) }

I apply this to selected sites using Stylebot, a Chrome extension that lets you apply custom CSS to specific websites.

In a nutshell, the CSS inverts the entire website and then inverts images again so they render normally. You can adjust the invert filter's amount parameter, set to 0.92 in the example: 0 means no inversion at all, and 1 (that is, 100%) means full inversion, where whites turn black and blacks turn white. I often prefer to stay within 0.90–0.95 to reduce the contrast.
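As an aside, here's a sketch in Python of the per-channel math behind invert() (my own illustration of the interpolation, using 0–255 channel values):

```python
def invert_channel(c, amount):
    """Linearly interpolate a 0-255 color channel between its
    original value and its inverse, like CSS filter: invert(amount)."""
    return round(c * (1 - amount) + (255 - c) * amount)

# Full inversion flips the channel entirely.
print(invert_channel(255, 1.0))   # 0

# At 0.92, white maps to a dark gray instead of pure black,
# which reduces the overall contrast.
print(invert_channel(255, 0.92))  # 20
```

This is why partial amounts feel softer: nothing ever reaches pure black or pure white.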

MARCH 28, 2023

Transcripts will be commonplace

I began transcribing podcast episodes four years ago with a transcription service that has since grown into a product that transcribes meetings, captures slides, and generates summaries. Today, there are several machine-learning-based transcription offerings, some of which are free if you run them on your device.

We're almost at a point where artificial intelligence can transcribe large audio pieces without making mistakes.

I've used Descript1 (paid) heavily over the past few years and explored other alternatives. Descript transcribes your recordings and lets you edit audio and video by editing text. Transcriptions are highly accurate, but even a small error rate means serious editing work if you want transcripts of hours-long conversations to be correct, especially when they contain technical keywords and niche terms. These tools let you provide sample text and a glossary of words that appear in your audio to improve their accuracy.2

These workflows are still much better (and faster) than transcribing manually. If you don't have time to fix mistakes, you can often get away by adding a disclaimer that "transcripts were automatically generated and may contain errors."

In September 2022, OpenAI trained and open-sourced a neural network called Whisper that, in their own words, "approaches human level robustness and accuracy on English speech recognition." That's a big step. The community can extend Whisper and use it for free.3 (I've been using Whisper and played with it on Live 98.)

These systems can predict word-level timestamps—making it possible to highlight the exact word spoken at a given time—and perform something called speaker diarization, a fancy way to say that the AI knows who's talking when by identifying the active speaker.
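As a toy illustration of what word-level timestamps enable, finding the active word at a given playback time is a simple lookup (the data below is made up for illustration, not real model output):

```python
# Hypothetical word-level timestamps: (word, start, end) in seconds.
words = [
    ("Hello", 0.0, 0.4),
    ("and", 0.5, 0.6),
    ("welcome", 0.7, 1.2),
    ("back", 1.3, 1.6),
]

def active_word(words, t):
    """Return the word being spoken at time t, or None during silence."""
    for text, start, end in words:
        if start <= t <= end:
            return text
    return None

print(active_word(words, 0.8))   # welcome
print(active_word(words, 0.45))  # None
```

A player can call something like this on every frame to highlight the exact word being spoken.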

Soon enough, transcripts will be commonplace. Transcripts will be free, automatic, and accurate; we'll expect them to be there.

Indeed, Spotify is already transcribing trending podcasts, and YouTube generates captions for every video. I imagine WhatsApp will transcribe voice notes so you can read them when you can't play them.

Transcripts help listeners follow content, browse through long pieces, or refer to particular points of a conversation. They also let editors navigate episodes quickly and get an idea of their content, making it easier to remove or move blocks of audio around to make a conversation more fluid. And they help editors write episode descriptions, notes, and chapters, tasks machine learning is starting to do for us automatically, which is exciting.

Soon enough, we'll hit record, delegate all this manual labor to the machine4, and focus on our next piece of content when done.

  1. Descript is a paid service. Their Pro subscription comes with thirty hours of monthly transcription. 

  2. Descript has a Glossary of words for this purpose, and Whisper's command-line interface accepts a parameter called --initial_prompt to provide text style and uncommon words. 

  3. Whisper is also available as a cloud service that costs $0.006 per minute of audio transcribed, roughly half a cent. You can browse OpenAI's API Pricing page for details. 

  4. OpenAI's GPT-3.5-turbo and GPT-4—the models behind ChatGPT—can perform these tasks. You may ask them to "summarize a text" or "extract keywords and topics from a paragraph." OneAI already offers a service to extract relevant keywords and generate text summaries. 

MARCH 27, 2023

After a two-month pause, we're preparing to release new Getting Simple podcast episodes.

Editing and publishing add friction and delays to my process, so I'm exploring code and ML workflows to post-process episode audio and generate transcripts, summaries & notes.

I'm not there yet. But OpenAI's Whisper (free) and Descript (paid) already provide accurate transcriptions. Existing projects and companies use #GPT-like language models to extract episode keywords, topics, chapters & summaries.

We'll soon have automatic episode notes.

It's exciting. I think we're getting very, very close. I've also played with Spotify's pedalboard Python package to post-process audio without relying on a Digital Audio Workstation (DAW).

That's cool because I can create reusable scripts for specific recording conditions and forget about audio editing: compressing, limiting, applying noise gates, or normalizing, things you'd otherwise do in Adobe Audition.

Let me know if you'd like to see these automations in the live stream and video tutorials or shared here on Twitter at @nonoesp.

MARCH 23, 2023

The point of writing as a human is to express ourselves: to pour words on paper (or the screen), to reflect on who we are, to learn, to evolve, and to inspire others. You can influence and inspire your future self as well.

Yesterday, I woke up and started the day writing five hundred words before I did any work. This is a practice I follow and will continue to follow. It doesn't make sense to delegate this to a machine because the whole point is to pour things out of my mind. Maybe this can turn into a conversation with an AI in the long run. I talk, we discuss, and my virtual assistant takes notes and generates a document instead of typing at my desk with a keyboard.

Machine intelligence is here to stay, and we'll find it harder to be original as it improves. But we must remember that these systems work because of all the knowledge humans have created before, with our mistakes and biases. They'll only get better if we continue to produce original content. That may be a mistaken assumption, but I believe it in some way. AI originality is probably down the road, and current systems can hallucinate. But I like to think we'll do better work together with them. We must wait until everything stabilizes to identify which parts won't be done by humans anymore. Maybe they still will, but at a scary-fast pace.

Writing is a medium for creative expression, as are drawing, singing, film, photography, and many, many other forms. Get a pen and write—express yourself. Type with your fingers or thumbs. Shoot a video. Take a photo. Doodle. Tell us a story.

MARCH 22, 2023

I've installed vnstat on my M1 MacBook Pro with Homebrew to monitor my network usage over time.

# Install vnstat on macOS with Homebrew.
brew install vnstat

Make sure you start the vnstat service with brew so vnstat can monitor your network usage.

brew services start vnstat

vnstat runs in the background, and you'll have to wait days for it to gather enough statistics to show you, for instance, the average monthly usage.

› vnstat -m
# gif0: Not enough data available yet.

After a few minutes, vnstat will start showing stats.

Last 5 minutes

› vnstat -5

# en0  /  5 minute
#         time        rx      |     tx      |    total    |   avg. rate
#     ------------------------+-------------+-------------+---------------
#     2023-03-19
#         12:45    839.44 MiB |    2.60 MiB |  842.04 MiB |   23.55 Mbit/s
#         12:50    226.26 MiB |  306.00 KiB |  226.56 MiB |   46.35 Mbit/s
#     ------------------------+-------------+-------------+---------------


› vnstat -h

# en0  /  hourly
#         hour        rx      |     tx      |    total    |   avg. rate
#     ------------------------+-------------+-------------+---------------
#     2023-03-19
#         12:00      1.04 GiB |    2.90 MiB |    1.04 GiB |   28.10 Mbit/s
#     ------------------------+-------------+-------------+---------------


› vnstat -m

# en0  /  monthly
#        month        rx      |     tx      |    total    |   avg. rate
#     ------------------------+-------------+-------------+---------------
#       2023-03      1.04 GiB |    2.90 MiB |    1.04 GiB |   28.10 Mbit/s
#     ------------------------+-------------+-------------+---------------
#     estimated      3.43 TiB |    9.56 GiB |    3.44 TiB |

You can read vnstat's guide to get familiar with its commands.

MARCH 21, 2023

For the sake of doing

Puzzles embody the German word Funktionslust.

Doing for the sake of doing.

Not pursuing an outcome. Just doing.

MARCH 20, 2023

Here are my highlights from Works Containing Material Generated by Artificial Intelligence.

One such recent development is the use of sophisticated artificial intelligence (“AI”) technologies capable of producing expressive material.[5] These technologies “train” on vast quantities of preexisting human-authored works and use inferences from that training to generate new content. Some systems operate in response to a user's textual instruction, called a “prompt.” [6] The resulting output may be textual, visual, or audio, and is determined by the AI based on its design and the material it has been trained on. These technologies, often described as “generative AI,” raise questions about whether the material they produce is protected by copyright, whether works consisting of both human-authored and AI-generated material may be registered, and what information should be provided to the Office by applicants seeking to register them.

[I]n 2018 the Office received an application for a visual work that the applicant described as “autonomously created by a computer algorithm running on a machine.” [7] The application was denied because, based on the applicant's representations in the application, the examiner found that the work contained no human authorship. After a series of administrative appeals, the Office's Review Board issued a final determination affirming that the work could not be registered because it was made “without any creative contribution from a human actor.”

In February 2023, the Office concluded that a graphic novel [9] comprised of human-authored text combined with images generated by the AI service Midjourney constituted a copyrightable work, but that the individual images themselves could not be protected by copyright.

In the Office's view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term “author,” which is used in both the Constitution and the Copyright Act, excludes non-humans.

[I]n the current edition of the Compendium, the Office states that “to qualify as a work of `authorship' a work must be created by a human being” and that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”

Individuals who use AI technology in creating a work may claim copyright protection for their own contributions to that work.

Applicants should not list an AI technology or the company that provided it as an author or co-author simply because they used it when creating their work.

MARCH 17, 2023

docker run -it -p HOST_PORT:CONTAINER_PORT your-image

When you run services on specific ports inside Docker, those ports are internal to the container's virtual environment. If you want to connect to those services from your machine, you need to expose the ports to the outside world explicitly. In short, you need to map TCP ports in the container to ports on the Docker host, which may be your computer. Here's how to do it.

Let's imagine we have a Next.js app running inside our Docker container.

› docker run -it my-app-image
next dev
# ready - started server on, url: http://localhost:3000

The site is served on port 3000 of the container, but we can't access it from our machine at http://localhost:3000. Let's map the port.

› docker run -it -p 1234:3000 my-app-image
next dev
# ready - started server on, url: http://localhost:3000
  • We've mapped TCP port 3000 of the container to port 1234 of the Docker host (our machine)
  • We can now browse the app at http://localhost:1234
  • When your machine loads port 1234, Docker forwards the communication to port 3000 of the container

MARCH 15, 2023

You can upload Shorts to YouTube with the YouTube API as you would upload any other video. Simply ensure your video has an aspect ratio of 9:16 and is less than 60 seconds long. YouTube will automatically set it as a Short.
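As a sanity check before uploading, you could verify both conditions programmatically. Here's a hypothetical helper that follows the rule stated above:

```python
def qualifies_as_short(width, height, duration_seconds):
    """Check the two conditions above: a 9:16 aspect ratio
    and a duration under 60 seconds."""
    is_9_16 = width * 16 == height * 9
    return is_9_16 and duration_seconds < 60

print(qualifies_as_short(1080, 1920, 45))  # True
print(qualifies_as_short(1920, 1080, 45))  # False: landscape video
```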

Follow this guide to see how to upload videos to YouTube with the YouTube API.

MARCH 14, 2023

Food and happiness

Hunger, a mollete, Spanish ham, olive oil, and a grill.

It may not be the definition of happiness. But boy is it close.

MARCH 10, 2023

Apple just unlocked new options to price apps: ten-cent steps between $0.10 and $10, fifty-cent steps between $10 and $50, and so on.

Choose from 900 price points — nearly 10 times the number of price points previously available for paid apps and one-time in-app purchases. These options also offer more flexibility, increasing incrementally across price ranges (for example, every $0.10 up to $10, every $0.50 between $10 and $50, etc.).
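To make the tiering concrete, here's a sketch that enumerates the two tiers quoted above ($0.10 steps up to $10, $0.50 steps from $10 to $50); the remaining tiers are omitted:

```python
def price_points():
    """List prices in $0.10 steps up to $10, then $0.50 steps to $50.
    Work in cents to avoid floating-point drift."""
    cents = list(range(10, 1001, 10))     # $0.10 .. $10.00
    cents += list(range(1050, 5001, 50))  # $10.50 .. $50.00
    return [c / 100 for c in cents]

points = price_points()
print(len(points))            # 180 price points in these two tiers
print(points[0], points[-1])  # 0.1 50.0
```

These two tiers alone account for 180 of the 900 price points.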

MARCH 7, 2023

The broken faucet

As I was on a mission to get our bathroom faucet repaired, I wrote about how broken items are often easier replaced than fixed—sometimes even cheaper.

My thinking applied here, and I ended up replacing the broken faucet.

  • The cost of the broken part, the ceramic cartridge, was 30–60% of the entire faucet.
  • I can install a new faucet, but I don't know how to replace the cartridge.
  • The cost of the part plus the repair service is likely higher than a new faucet.
  • Neither warranty nor home insurance covers damaged cartridges.

I had removed and reinstalled the faucet multiple times, so I knew how to install one.

I drove to Obramat1, paid fifty euros for a new faucet of the same model, and installed it myself.

The leak is gone.

  1. Málaga's Obramat was formerly known as Bricomart. 

FEBRUARY 28, 2023

AI and a text prompt

I came across Pants on Spotify, Lambert's new single released February 24, 2023. On its album cover, a pair of legs with blue jeans pop out of a cloud next to what may be a flying lamb. In the background, a blue sky, clouds, mountains, and greenery.

If anything, the artwork is appealing to the eye.

At first sight, the image reminded me of my experiments to produce art in the style of American painter Edward Hopper with DALL-E, OpenAI's text-to-image artificial intelligence.

Indeed, the corner of Lambert's album cover features a color key with yellow, light blue, green, red, and blue squares, which reveals that OpenAI's algorithm also generated this image.

No brush strokes, photography, or Photoshop, but AI, a text prompt, and, likely, an entertaining process of trial and error.

FEBRUARY 24, 2023

Here's how to define simple async functions in TypeScript.

(async (/*arguments*/) => {/*function logic*/})(/*values*/); 

No arguments

// Define an asynchronous function.
const helloAsync = async () => { console.log("Hey, Async!"); }

// Call it asynchronously.
helloAsync();

With arguments

(async(text: string) => { console.log(text); })("Hello, Async!")

With delay

(async(text: string) => { setTimeout(() => console.log(text), 2000); })("Hello, Async!")

Sequentially, inside an asynchronous function

// Say we have an async talk() function that logs text to the console.
const talk = async(text: string) => { console.log(text); }

// And a sleep() function that uses a Promise to wait for milliseconds.
const sleep = (ms: number) => {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// We can wrap calls to async functions in an async function.
// Then `await` to execute them sequentially.
(async () => {
  await talk(`Hello!`);
  await sleep(1000);
  await talk(`What's up?`);
  await sleep(2000);
  await talk(`Bye now!`);
})();

FEBRUARY 23, 2023

Here's how to list the commits that happened between two tags.

git log --pretty=oneline 0.8.0...0.9.0

The two tags—in this case, 0.8.0 and 0.9.0—need to exist.

You can list existing tags in a repository as below.

git tag

FEBRUARY 22, 2023

You can list what packages are installed globally in your system with npm -g list—shorthand for npm --global list—whereas you'd list the packages installed in an NPM project with npm list.

Let's see an example of what the command might return.

npm -g list
# /opt/homebrew/lib
# ├── cross-env@7.0.3
# ├── http-server@14.1.1
# ├── node-gyp@9.3.1
# ├── npm@9.5.0
# ├── pm2@5.2.2
# ├── spoof@2.0.4
# ├── ts-node@10.9.1
# └── typescript@4.9.5

FEBRUARY 21, 2023

Easier replaced than fixed

When replacing a broken item is easier (even cheaper) than fixing it, and online shopping is more convenient than going to the store, we are likely to choose to save time, money, and effort, even when it leads to more waste and pollution.

The internet rewards low prices, free shipping, and fast delivery over everything else.

FEBRUARY 17, 2023

Here are some of the commands we used during the Creative Machine Learning Live 97.

First, create an Anaconda environment or install the package in your existing Python installation with pip.

pip install imaginairy

Before running the commands below, I entered an interactive imaginAIry shell.

🤖🧠> # Commands here
# Upscale an image 4x with Real-ESRGAN.
upscale image.jpg

# Generate an image and animate the diffusion process.
imagine "a sunflower" --gif

# Generate an image and create a GIF comparing it with the original.
imagine "a sunflower" --compare-gif

# Schedule argument values.
edit input.jpg \
    --prompt "a sunflower" \
    --steps 21 \
    --arg-schedule "prompt_strength[6:8:0.5]" \
    --compilation-anim gif

FEBRUARY 15, 2023

Here's how to add NuGet packages from a local source to your Visual Studio project.

  • Create a new project or open an existing one.
  • Create a folder in your computer that will be a "repository" of local NuGet packages. (Let's name it local-nugets).
  • In Visual Studio, go to Tools > Options > NuGet Package Manager > Package Sources.
  • Click the Add button (the green cross) to create a new Package Source.
  • In the bottom inputs, choose a custom name for this new Package Source, then click the three dots (...) to browse and select the folder you previously created (local-nugets in my case), and click Update.
  • Now place your NuGet package inside your local-nugets folder; all that's left is to install the package as follows.
  • Go to Project > Manage NuGet Packages > Browse.
  • Select your new Package Source, which should be listed.
  • Click it and select Install.
  • You're all done installing the package. Just add the corresponding headers to your C# file to include the NuGet package in your project.

FEBRUARY 14, 2023

Looking for answers

Google co-founders Larry Page and Sergey Brin almost sold the search engine for one million dollars in 1999.

Today, as Google is valued at $1.35 trillion1, an arms race is taking place between Microsoft and Google to dominate online search.

Unless you are an employee or shareholder of either of these companies, you likely don't care much about who wins. Yet search is one of Google's primary revenue sources.

The business model of Google search and Bing (Microsoft's search engine) is to provide results for our web searches, not for free, but in exchange for the consumption of paid ads.

The war is taking place because Microsoft recently announced Bing AI, a new version of their search engine powered by a next-generation OpenAI large language model that is faster, more accurate, and more capable than ChatGPT and has been customized for search2.

In case you’ve been living under a rock, OpenAI released its cutting-edge human language AI model, GPT-3.5, through an interface intuitive to all of us—text messages—as ChatGPT. This app set a new record for the fastest-growing user base in online applications: 100 million active users in January 2023.3 4

ChatGPT can hold conversations, answer questions, and solve problems.

In this battle, we are mere spectators waiting to see who wins.

We're not searching the internet anymore. We're looking for answers.

  1. Google: My Top Stock For 2023. Seeking Alpha. Accessed February 13, 2023. 

  2. Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web. Microsoft. Accessed February 13, 2023. 

  3. ChatGPT sets record for fastest-growing user base. Reuters. Accessed February 13, 2023. 

  4. As Dylan Patel and Afzal Ahmad point out, ChatGPT surpassed TikTok and Instagram in fastest growth, which achieved 100 million active users in 9 months and 2.5 years, respectively. 

FEBRUARY 13, 2023

Here's how to randomize a list of strings in bash.

On macOS, you can use Terminal or iTerm2. Note that shuf isn't built into macOS; Homebrew's coreutils package provides it (as gshuf, unless you add the gnubin directory to your PATH).

The shuf command shuffles a list that is "piped" to it.

Shuffling the contents of a directory

An easy way to do that is to list a directory's contents with ls and then shuffle them.

ls ~/Desktop | shuf

Shuffling a list of strings

The easiest way to shuffle a set of strings is to define an array in bash and shuffle it with shuf.

WORDS=('Milk' 'Bread' 'Eggs'); shuf -e ${WORDS[@]}

You can use pbcopy to copy the shuffled list to your clipboard.

WORDS=('Milk' 'Bread' 'Eggs' ); shuf -e ${WORDS[@]} | pbcopy
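If you'd rather stay in Python, the same shuffle can be done with the random module (an equivalent I'm adding for comparison, not part of the bash workflow):

```python
import random

words = ['Milk', 'Bread', 'Eggs']

# random.sample with k=len(words) returns a shuffled copy,
# leaving the original list untouched (like `shuf -e`).
shuffled = random.sample(words, k=len(words))
print(shuffled)
```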

Shuffling lines from a text file

Another way to randomize a list of strings from bash is to create a text file, in this case named words.txt, with a string value per line.


You can create this file manually or from the command line with the following command.

printf "Bread\nMilk\nChicken\nTurkey\nEggs\n" > words.txt

Then, we cat the contents of words.txt and shuffle the order of its lines with shuf.

cat words.txt | shuf
# Eggs
# Milk
# Chicken
# Turkey
# Bread

Again, you can save the result to the clipboard with pbcopy.

cat words.txt | shuf | pbcopy

If you found this useful, let me know!

FEBRUARY 10, 2023

Here's a Python class that can track and push metrics to AWS CloudWatch.

Metrics are reset to their initial values on creation and when metrics are uploaded to CloudWatch.

A metrics class ready to track and push metrics to AWS CloudWatch.

from datetime import datetime
import boto3

# CloudWatch metrics namespace.
METRICS_NAMESPACE = 'my_metrics_namespace'

# Duration to wait between metric uploads.
METRICS_UPLOAD_THRESHOLD_SECONDS = 50


class Metrics:
    '''Holds metrics, serializes them to CloudWatch format,
    and ingests foreign metric values.'''

    def __init__(self):
        self.reset()

    def reset(self):
        '''Resets metric values and last upload time.'''
        self.last_upload_time = datetime.now()
        # Your custom metrics and initial values.
        # Note that here we're using 'my_prefix' as
        # a custom prefix in case you want this class
        # to add a prefix namespace to all its metrics.
        self.my_prefix_first_metric = 0
        self.my_prefix_second_metric = 0

    def to_data(self):
        '''Serializes metrics and their values.'''
        def to_cloudwatch_format(name, value):
            return {'MetricName': name, 'Value': value}

        result = []
        for name, value in vars(self).items():
            if name != 'last_upload_time':
                result.append(to_cloudwatch_format(name, value))
        return result

    def ingest(self, metrics, prefix=''):
        '''Adds foreign metric values to this metrics object.'''
        input_metric_names = [attr for attr in dir(metrics)
                              if not callable(getattr(metrics, attr))
                              and not attr.startswith("__")]

        # Iterate through foreign keys and add metric values.
        for metric_name in input_metric_names:

            # Skip the foreign object's upload time, if any.
            if metric_name == 'last_upload_time':
                continue

            # Get value of foreign metric.
            input_metric_value = getattr(metrics, metric_name)

            # Get metric key.
            metric_key = f'{prefix}_{metric_name}' if prefix else metric_name

            # Get current metric value.
            metric_value = getattr(self, metric_key)

            # Add foreign values to this metrics object.
            setattr(self, metric_key, input_metric_value + metric_value)

    def upload(self, force=False):
        '''Uploads metrics to CloudWatch when time since last
        upload is above a duration or when forced.'''

        # Get time elapsed since last upload.
        seconds_since_last_upload = \
            (datetime.now() - self.last_upload_time).seconds

        # Only upload if duration is greater than threshold,
        # or when the force flag is set to True.
        if seconds_since_last_upload > METRICS_UPLOAD_THRESHOLD_SECONDS \
           or force:
            # Upload metrics to CloudWatch.
            cloudwatch = boto3.client('cloudwatch')
            cloudwatch.put_metric_data(
                Namespace=METRICS_NAMESPACE,
                MetricData=self.to_data())
            # Reset metrics.
            self.reset()
To use this class, we just have to instantiate a metrics object, track some metrics, and upload them.

# Create a metrics object.
metrics = Metrics()

# Add values to its metrics.
metrics.my_prefix_first_metric += 3
metrics.my_prefix_second_metric += 1

# Upload metrics to CloudWatch.
metrics.upload(force=True)

If you're processing metrics at a fast pace, you don't want to upload them every single time you increase their value; otherwise, CloudWatch will complain. In certain cases, AWS CloudWatch's limit is 5 transactions per second (TPS) per account or AWS Region. When this limit is reached, you'll receive a RateExceeded throttling error.

By calling metrics.upload(force=False), we only upload once every METRICS_UPLOAD_THRESHOLD_SECONDS (in this example, at most every 50 seconds).
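The gating logic can be isolated into a minimal sketch (an illustration of the pattern, independent of CloudWatch; the clock parameter is injected so the behavior is easy to test):

```python
import time

class ThrottledAction:
    """Run an action at most once per threshold window unless forced."""

    def __init__(self, threshold_seconds=50, clock=time.monotonic):
        self.threshold = threshold_seconds
        self.clock = clock
        self.last_run = clock()

    def maybe_run(self, force=False):
        """Return True (and reset the timer) when the action should run."""
        if force or self.clock() - self.last_run > self.threshold:
            self.last_run = self.clock()
            return True  # A real implementation would upload here.
        return False
```

With a fake clock, you can verify that calls inside the window are skipped and later or forced calls go through.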

import time

# Create a metrics object.
metrics = Metrics()

for i in range(0, 100, 1):
    # Wait for illustration purposes,
    # as if we were doing work.
    time.sleep(1)

    # Add values to its metrics.
    metrics.my_prefix_first_metric += 3
    metrics.my_prefix_second_metric += 1

    # Only upload if more than the threshold
    # duration has passed since we last uploaded.
    metrics.upload()

# Force-upload metrics to CloudWatch once we're done.
metrics.upload(force=True)
Lastly, here's how to ingest foreign metrics with or without a prefix.

# We define a foreign metrics class.
class OtherMetrics:

    def __init__(self):
        self.reset()

    def reset(self):
        # Note that here we don't have 'my_prefix'.
        self.first_metric = 0
        self.second_metric = 0

# We instantiate both metric objects.
metrics = Metrics()
other_metrics = OtherMetrics()

# The foreign metrics track values.
other_metrics.first_metric += 15
other_metrics.second_metric += 3

# Then our main metrics class ingests those metrics.
metrics.ingest(other_metrics, prefix='my_prefix')

# Then our main metrics class has those values.
metrics.my_prefix_first_metric   # Returns 15
metrics.my_prefix_second_metric  # Returns 3

If you found this useful, let me know!

Take a look at other posts about code, Python, and Today I Learned(s).


Here's how to sort a list of Python dictionaries by a key (a property name) of its items. Check this post if you're looking to sort a list of lists instead.

# A list of people
people = [
    {'name': 'Nono', 'age': 32, 'location': 'Spain'},
    {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
    {'name': 'Phillipe', 'age': 100, 'location': 'France'},
    {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
]

# Sort people by age, ascending
people_sorted_by_age_asc = sorted(people, key=lambda x: x['age'])
# [
#     {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
#     {'name': 'Nono', 'age': 32, 'location': 'Spain'},
#     {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
#     {'name': 'Phillipe', 'age': 100, 'location': 'France'}
# ]

# Sort people by age, descending
people_sorted_by_age_desc = sorted(people, key=lambda x: -x['age'])
# [
#     {'name': 'Phillipe', 'age': 100, 'location': 'France'},
#     {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
#     {'name': 'Nono', 'age': 32, 'location': 'Spain'},
#     {'name': 'Alice', 'age': 20, 'location': 'Wonderland'}
# ]

# Sort people by name, ascending
people_sorted_by_name_asc = sorted(people, key=lambda x: x['name'])
# [
#     {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
#     {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
#     {'name': 'Nono', 'age': 32, 'location': 'Spain'},
#     {'name': 'Phillipe', 'age': 100, 'location': 'France'}
# ]


You can measure the time elapsed during the execution of Python commands by keeping a reference to the start time and then subtracting the current time from it at any point in your program to obtain the duration between two points in time.

from datetime import datetime
import time

# Define the start time.
start = datetime.now()

# Run some code (here, wait for two seconds).
time.sleep(2)

# Get the time delta since the start.
elapsed = datetime.now() - start
# datetime.timedelta(seconds=2, microseconds=5088)
# 0:00:02.005088

# Get the seconds since the start.
elapsed_seconds = elapsed.seconds
# 2

Let's create two helper functions to get the current time (i.e. now) and the elapsed time at any moment.

# Returns the current time
# (and, if provided, prints the event's name)
def now(event_name=''):
  if event_name:
    print(f'Started {event_name}..')
  return datetime.now()

# Store current time as `start`
start = now()

# Returns the time elapsed since `beginning`
# (and, optionally, prints the duration)
def elapsed(beginning=start, log=False):
  duration = datetime.now() - beginning
  if log:
    print(duration)
  return duration

With those utility functions defined, we can measure the duration of different events.

# Define time to wait
wait_seconds = 2

# Measure duration (while waiting for 2 seconds)
beginning = now(f'{wait_seconds}-second wait.')

# Wait.
time.sleep(wait_seconds)

# Get time delta.
elapsed_time = elapsed(beginning, True)
# Prints 0:00:02.004004

# Get seconds.
elapsed_seconds = elapsed_time.seconds
# 2

# Get microseconds.
elapsed_microseconds = elapsed_time.microseconds
# 4004

Before you go

If you found this useful, you might want to join my mailing lists or take a look at other posts about code, Python, React, and TypeScript.

FEBRUARY 7, 2023

No time left

When time runs out, you're left with two choices: give up or finish.

There's no time left to polish or overthink.

And it turns out we often squeeze ourselves to come up with something instead of giving up.

It'll be over soon, you think.

Forcing yourself into no-time-left mode—by establishing a rigid work schedule, for instance—is a great strategy to combat the trough1 and be efficient.

  1. Daniel Pink uses the term trough in his book When: The Scientific Secrets of Perfect Timing to refer to the stretches between the beginnings and ends of the day or a work session, in which productivity easily dips due to a feeling of time slack. 


Here's how to sort a Python list of lists by one of its items' values. Check this post if you're looking to sort a list of dictionaries instead.

# A list of people
# name, age, location
people = [
    ['Nono', 32, 'Spain'],
    ['Alice', 20, 'Wonderland'],
    ['Phillipe', 100, 'France'],
    ['Jack', 45, 'Caribbean'],
]

# Sort people by age, ascending
people_sorted_by_age_asc = sorted(people, key=lambda x: x[1])
# [
#     ['Alice', 20, 'Wonderland'],
#     ['Nono', 32, 'Spain'],
#     ['Jack', 45, 'Caribbean'],
#     ['Phillipe', 100, 'France']
# ]

# Sort people by age, descending
people_sorted_by_age_desc = sorted(people, key=lambda x: -x[1])
# [
#     ['Phillipe', 100, 'France'],
#     ['Jack', 45, 'Caribbean'],
#     ['Nono', 32, 'Spain'],
#     ['Alice', 20, 'Wonderland']
# ]

# Sort people by name, ascending
people_sorted_by_name_asc = sorted(people, key=lambda x: x[0])
# [
#     ['Alice', 20, 'Wonderland'],
#     ['Jack', 45, 'Caribbean'],
#     ['Nono', 32, 'Spain'],
#     ['Phillipe', 100, 'France']
# ]


Here's how to read contents from a comma-separated value (CSV) file in Python; maybe a CSV that already exists or a CSV you saved from Python.

Read CSV and print rows

import csv

csv_file_path = 'file.csv'

with open(csv_file_path, encoding='utf-8') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    rows = list(csv_reader)

    # Print the first five rows
    for row in rows[:5]:
        print(row)

    # Print all rows
    for row in rows:
        print(row)

FEBRUARY 2, 2023

Here's how to generate pseudo-random numbers in Python.

import random

# Random generation seed for reproducible results
seed = 42

# Float
random.uniform(0, 10)
# e.g., 7.475987589205186

# Integer
random.randint(0, 10)
# e.g., 7

# Seeded integer, reproducible across runs
random.Random(seed).randint(0, 999)
# 654

See the random module for more information.

FEBRUARY 1, 2023

Here's how to pass arguments to a Dockerfile when building a custom image with Docker.

First, you need to define a Dockerfile which uses an argument.

# Dockerfile
FROM python

ARG code_dir # Our argument

WORKDIR /code/
ENTRYPOINT ["python", "/code/main.py"]

COPY ./$code_dir /code/
RUN pip install -r requirements.txt

What the above Dockerfile does is parametrize the location of the directory containing main.py, our Docker image's entry-point script. (The name main.py is my stand-in for this example.) For this example's sake, let's assume our directory structure looks like the following.

# code_a/main.py
print('This is code_a!')
# code_b/main.py
print('This is code_b!')

Then you'll pass the code_dir variable as an argument to docker build to decide whether the Dockerfile is going to COPY folder code_a or code_b into our image.

Let's pass code_a as our code_dir first.

docker build -t my_image_a --build-arg code_dir=code_a .
docker run -it my_image_a
# Prints 'This is code_a!'

Then code_b.

docker build -t my_image_b --build-arg code_dir=code_b .
docker run -it my_image_b
# Prints 'This is code_b!'

The objective of this example was to avoid maintaining two Dockerfiles that look exactly the same but specify different source code paths. We could have achieved the same result with the following two Dockerfiles, specifying which one to use in each case with the -f flag.

# Dockerfile.code_a
FROM python

WORKDIR /code/
ENTRYPOINT ["python", "/code/main.py"]

COPY ./code_a /code/
RUN pip install -r requirements.txt

# Dockerfile.code_b
FROM python

WORKDIR /code/
ENTRYPOINT ["python", "/code/main.py"]

COPY ./code_b /code/
RUN pip install -r requirements.txt

docker build -t my_image_a -f Dockerfile.code_a .
docker run -it my_image_a
# Prints 'This is code_a!'

docker build -t my_image_b -f Dockerfile.code_b .
docker run -it my_image_b
# Prints 'This is code_b!'

If you found this useful, let me know!

JANUARY 31, 2023

Blogging daily in 2022

In 2022, I tried blogging every day, publishing 330 out of 365 days.

Many of my publications were shallow updates and technical posts.

But the experiment made me write more.

Yet, as Derek Sivers points out, sometimes you "spend more time being shallow to get something posted," which often happens to me when writing my Tuesday posts.

Ideally, I'd spend many hours writing every week, so I can at least get one essay or mini-essay on the blog. But it's not always easy.

My plan for 2023 is to continue publishing this newsletter weekly, use it to develop ideas for more extensive essays broken down into individual concepts, and publish longer writings on Substack when they're worth sharing.

The gist of writing is to entertain the reader.

Want to see older publications? Visit the archive.

Listen to Getting Simple.