Here's how to read the contents of a comma-separated value (CSV) file in Python, whether it's a CSV that already exists or one you saved from Python.
import csv
csv_file_path = 'file.csv'
with open(csv_file_path, encoding='utf-8') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    rows = list(csv_reader)
    # Print the first five rows
    for row in rows[:5]:
        print(row)
    # Print all rows
    for row in rows:
        print(row)
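If your CSV has a header row, you can also read each row as a dictionary. Here's a minimal sketch using csv.DictReader, assuming a hypothetical file.csv with name and city columns.
import csv
# Assumes file.csv has a header row, e.g., name,city
with open('file.csv', encoding='utf-8') as csv_file:
    csv_reader = csv.DictReader(csv_file)
    for row in csv_reader:
        # Each row is a dict keyed by the header names
        print(row['name'], row['city'])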
Here's how to generate pseudo-random numbers in Python.
import random
# Random generation seed for reproducible results
seed = 42
# Float
random.Random(seed).uniform(3,10)
# 7.475987589205186
# Integer
int(random.Random(seed).uniform(3,10))
# 7
# Integer
random.Random(seed).randint(0, 999)
# 654
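Note that the examples above create a new random.Random instance for each call, which is why each call returns the same first value. Here's a minimal sketch of seeding a generator once and drawing several values from the same reproducible sequence; the printed values are whatever the seeded generator happens to produce.
import random
# Seed the generator once for a reproducible sequence
rng = random.Random(42)
# Successive calls draw different values from the same seeded sequence
print(rng.uniform(3, 10))
print(rng.randint(0, 999))
print(rng.random())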
See the random module for more information.
Here's how to pass arguments to a Dockerfile when building a custom image with Docker.
First, you need to define a Dockerfile which uses an argument.
# Dockerfile
FROM python
# Our build argument
ARG code_dir
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./$code_dir /code/
RUN pip install -r requirements.txt
What the above Dockerfile does is parametrize the location of the directory of script.py, our Docker image's entry point.
For this example's sake, let's assume our directory structure looks like the following.
project/
    Dockerfile
    code_a/script.py
    code_b/script.py
# code_a/script.py
print('This is code_a!')
# code_b/script.py
print('This is code_b!')
Then you'll pass the code_dir variable as an argument to docker build to decide whether the Dockerfile is going to COPY folder code_a or code_b into our image.
Let's pass code_a as our code_dir first.
docker build -t my_image_a --build-arg code_dir=code_a .
docker run -it my_image_a
# Prints 'This is code_a!'
Then code_b.
docker build -t my_image_b --build-arg code_dir=code_b .
docker run -it my_image_b
# Prints 'This is code_b!'
The objective of this example was to avoid having two different Dockerfiles that look exactly the same but simply specify different source code paths.
We could have done the same with the following two Dockerfiles, specifying which Dockerfile to use in each case with the -f flag.
# Dockerfile.code_a
FROM python
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./code_a /code/
RUN pip install -r requirements.txt
# Dockerfile.code_b
FROM python
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./code_b /code/
RUN pip install -r requirements.txt
docker build -t my_image_a -f Dockerfile.code_a .
docker run -it my_image_a
# Prints 'This is code_a!'
docker build -t my_image_b -f Dockerfile.code_b .
docker run -it my_image_b
# Prints 'This is code_b!'
If you found this useful, let me know!
In 2022, I tried blogging every day, publishing 330 out of 365 days.
Many of my publications were shallow updates and technical posts.
But the experiment made me write more.
Yet, as Sivers points out, sometimes you "spend more time being shallow to get something posted," which often happens to me when writing my Tuesday posts.
Ideally, I'd spend many hours writing every week, so I can at least get one essay or mini-essay on the blog. But it's not always easy.
My plan for 2023 is to continue publishing this newsletter weekly, use it to develop ideas for more extensive essays which get broken down into individual concepts, then publish longer writings when they are worth sharing on Substack.
The gist of writing is to entertain the reader.
The Google Research team has published a paper for MusicLM, a machine learning model that generates high-fidelity music from text prompts, and it works extremely well. But they won't release it to the public, at least not yet.
You can browse and play through the examples to listen to results obtained by the research team for a wide variety of text-to-music tasks, including audio generation from rich captions, long generation, story mode, text and melody conditioning, painting caption conditioning, 10s audio generation from text, and generation diversity.
I'm particularly surprised by the text and melody conditioning examples, where a text prompt—say, "piano solo," "string quartet," or "tribal drums"—can be combined with a melody prompt—say "bella ciao - humming"—generating accurate results.
Even though they haven't released the model, Google Research has publicly released MusicCaps to support future research, "a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts."
Leading zeros are extra zeros to the left of a number, used when you want every number in a set to have the same number of digits.
For instance, 0001, 0002, and 0003 are a good format if you think you'll get to thousands of entries, as you can stay at four digits up to 9999.
# Define your number
number = 1
two_digits = f'{number:02d}'
# 01
four_digits = f'{number:04d}'
# 0001
We use the Python format specifier {number:04d} to enforce a minimum number of digits in our number variable.
You can use it to set the value of a string or to create and print a longer string with that number, without necessarily having to store the padded value.
a_number = 42
print(f'The number is {a_number:06d}.')
# The number is 000042.
print(f'The number is {512:06d}.')
# The number is 000512.
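As an alternative sketch, if you already have the number as a string (or don't want an f-string), str.zfill pads it with leading zeros to a given width.
# Pad with leading zeros using str.zfill
print(str(42).zfill(6))
# 000042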
The -e flag/option of pip "installs a project in editable mode (i.e. setuptools “develop mode”) from a local project path or a VCS url."
pip install -e .
-e, --editable <path/url>
As described in the expanded command flag, -e stands for editable.
This guide is for macOS Ventura. Check this page for macOS Monterey1.
Like port 3000, port 5000 is commonly used to serve local development servers. When updating to the latest macOS operating system, I noticed my React development server, which I serve with create-serve, was using a port other than 5000 because port 5000 was already in use. (You may find a message along the lines of Port 5000 already in use.)
By running lsof -i :5000, I found out the process using the port was named ControlCenter, which is a native macOS application. If this happens to you, even if you force kill the application, it will restart itself. On my laptop, lsof -i :5000 returns that Control Center is using the port with process id 433. I could run kill 433, but macOS keeps restarting the process.
The process running on this port turns out to be an AirPlay server. You can deactivate it in System Settings › General › AirDrop & Handoff and uncheck AirPlay Receiver to release port 5000.
As an aside, I just ran into this same issue when trying to run a Node.js server application as of September 13, 2022.
uncaught exception: listen EADDRINUSE: address already in use :::5000
Error: listen EADDRINUSE: address already in use :::5000
If you found this useful, let me know!
Writing every day is easy.
Publishing daily is harder.
These turn out to be two significantly different tasks.
The former consists of pouring words in private every day.
The latter requires getting to a finished piece every single day. You can build a strategy to plan and schedule what will show up in your blog each day in advance, or you can spend minutes to hours every day to get something out the door.
As Derek Sivers says, "writing daily but posting when ready" is better.
Today I learned you can use the plus (+) operator to concatenate or extend lists in Python.
Say you have two lists.
list_a = [1, 2, 3]
list_b = ['Nono', 'MA']
And say you want to create a continuous list with the contents of both, which would look something like [1, 2, 3, 'Nono', 'MA'].
You can simply add both lists to obtain that result.
>>> combined_list = [1, 2, 3] + ['Nono', 'MA']
>>> combined_list
[1, 2, 3, 'Nono', 'MA']
Of course, it doesn't make much sense in this example because we're explicitly defining the lists and could define a combined list directly.
combined_list = [1, 2, 3, 'Nono', 'MA']
But it can be useful when we actually need to add lists, for instance, to concatenate the results of glob file-listing operations.
>>> from glob import glob
>>> files_a = glob('a/*')
>>> files_a
['a/file.txt', 'a/image.jpg']
>>> files_b = glob('b/*')
>>> files_b
['b/data.json', 'b/profile.jpeg']
>>> all_files = files_a + files_b
>>> all_files
['a/file.txt', 'a/image.jpg', 'b/data.json', 'b/profile.jpeg']
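You can also extend a list in place instead of creating a new one. Here's a minimal sketch using the += operator (equivalent to list.extend), reusing the hypothetical file lists from above.
>>> files_a = ['a/file.txt', 'a/image.jpg']
>>> files_b = ['b/data.json', 'b/profile.jpeg']
>>> files_a += files_b  # same as files_a.extend(files_b)
>>> files_a
['a/file.txt', 'a/image.jpg', 'b/data.json', 'b/profile.jpeg']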
Apple's new Mac Mini and MacBook Pro models have been upgraded. The Mac Mini now ships with the M2 and M2 Pro chips, while 14-inch and 16-inch MacBook Pros can be configured with the M2 Pro and M2 Max. As can be expected, these new silicon chips are faster and more efficient than their M1 predecessors.
Yet I've been using an M1 Max MacBook Pro and the performance is astonishing, even more so now that most development tools and consumer software are compatible with Apple Silicon. Few programs require Rosetta 2; most run natively.
The fact that the M2 Pro now ships in the Mac Mini makes it a great choice for high-end workflows. The Mac Studio can also be expected to be upgraded with a version of the M2 Ultra whenever that chip is released.
Here's a list of things that may be slowing you down.
Distractions.
A lack of focus.
Bad work habits.
Little planning.
Fragmented work sessions.
Too many goals.
Too few goals.
Unclear goals.
Weak systems.
Constant context switching.
Lack of a proper workspace.
Working solo.
I liked a quote from Thoreau in Walden, which I highlighted from Cal Newport's Digital Minimalism. “The cost of a thing is the amount of what I will call life which is required to be exchanged for it, immediately or in the long run.”
The beginning of the year is a great temporal landmark to establish New Year's resolutions, a time to adopt atomic habits and hone passion projects.
In my case, I'll stop overdoing things and work to ship better content faster, reduce creative friction by streamlining and automating workflows where possible, and delegate work to other creatives.
It's great to improve and ideate new ways of doing things, but we shouldn't forget to enjoy established workflows once we've reached a good-enough point.
What are your goals for 2023?
As I mentioned last week, I published 52 stories and sketches in 2022.
Here are the ones with the most views (in descending order).
What stories did you enjoy the most?
It's that time of the year again.
The start of a new year is an important temporal landmark to establish New Year's resolutions, a great time to adopt atomic habits and hone passion projects.
I'll continue to stop overdoing things and ship better content faster, to reduce creative friction by streamlining and automating workflows where possible, and to delegate work to other creatives.
I will always continue to improve and ideate new ways of doing things, but it's good to enjoy established workflows once you reach a good-enough point.
Iterating through a list.
for i in 1 2 3
do
echo $i
done
# 1
# 2
# 3
Iterating through a list generated with a sequence.
for i in $(seq 1 2 10)
do
echo $i
done
# 1
# 3
# 5
# 7
# 9
seq 1 2 10 creates a list of numbers from 1 to 10 in steps of 2.
Hi, Friends.
Here's my last publication of the year.1
I wanted to thank you (!) for being there and share a few highlights from 2022.
At the turn of 2019, I published Twelve Grapes—a short reflection on temporal landmarks and New Year's resolutions. Over the past years, my goal has been to stop overdoing things and ship better content faster, to reduce creative friction by streamlining and automating workflows where possible, and to delegate work to other creatives. I will always continue to improve and ideate new ways of doing things, but it's good to enjoy established workflows once you reach a good-enough point. (As Zach Kron and I discussed in a recent podcast conversation, it's good to stick to a set of tools when you reach control of the creative medium.) I'm pretty happy with where I got regarding live streaming, recording video tutorials, and podcasting (both remotely and in person).
A great acquisition was Elgato's Stream Deck XL. I've set up plenty of actions that significantly improve my streaming workflows, such as setting up my studio lights for recording, creating chapter markers, switching camera scenes, or configuring my computer and opening software required to record.
One of my goals to remove friction from my creative writing process during 2022 was to publish on Substack, in English, without a required illustration. I did this on my blog instead2, posting almost daily. And even though I didn't get to establish a clear newsletter strategy or publish every day, I published a lot more content than in 2021.
If you're thinking of starting a blog or a personal journal, I invite you to read One Word per Day. I agree with Seth Godin that "everyone should have a blog and write daily, in public." It's free and a great way to establish your tone before anyone cares about your writing.
I mentioned I wrote 120,000 words in 2022. As I said in Idea Generation Versus Execution, "I write daily in my diary without a specific goal in mind other than capturing memories, facts, and thoughts. Then, I conduct review sessions to highlight passages worth developing further and deep writing sessions where I focus on turning drafts into publishable essays." Much of what I write is for myself. But you'd be surprised at how many ideas that initially seem to go nowhere I end up using in my publications.
I want to highlight two things from the podcast. In 2022, I read three books and had a podcast conversation with their authors—Formulations by Andrew Witt, Native Places: Drawing as a Way to See by Frank Harmon, and Systems Upgrade by Leire Asensio Villoria and David Mah—and I recorded five in-person conversations on camera, one of them already out with Zach Kron, and others with Ian Keough, Andy Payne, John Pierson, and Alex O'Connor coming out soon.
There's a lot more I'd like to share with you.
But I'll leave that for future posts.
If there's anything you've learned or enjoyed from my sketches and stories of 2022, I'd love for you to send me a note. Feel free to reply to this email if you're getting the newsletter, tweet @nonoesp, add a comment to this publication, or message me on Instagram or via this contact form.
Thanks so much for being there!
Goodbye, 2022.
Happy New Year!
This is the third time I write a year-goodbye post. You can read previous ones from 2020 and 2021. ↩
You can find all my publications from 2022 in my blog's archive. ↩
Today, I spent a couple hours playing with the Alfa Duo, and I've been able to successfully embroider custom vector drawings.
Here are the steps of my current workflow, still in beta.
.ai for Illustrator
PES for embroidery
PES to my iPhone via AirDrop
I've been tinkering with Processing 4 and PEmbroider to embroider a few tests with the Alfa Duo.
I'll share some of my experiments, but first I want to make sure I have things worth showing and a few tests working.
Back in December 2020, I used the JEF format to save embroidery files and they were working in the Alfa Duo machine. (Here's a video of those experiments.) Actually, the test files I saved back then still work with this machine. But the latest version of PEmbroider generates JEF files that make the machine crash. Luckily, I've been able to make my files work with the PES file format.
If you'd be interested in seeing any of this in the live stream and YouTube videos, please let me know on Twitter @nonoesp or message me on the Discord community!
Preparing to perform an activity—say, setting up for recording or taking your sketching tools out—adds friction to the creative process, as do post-processing and publishing whatever you make, like editing footage, digitizing drawings, and sharing your creations on social media.
In film-making, shooting is often perceived as the most important part of the process. However, what happens before and after recording is equally important. Planning ensures you’ll shoot the essential takes and post-processing filters out unnecessary takes and shapes the final movie. There's creativity in recording but also in planning and editing.
What's key, not only in film-making but in any creative activity, is to identify which parts simply add friction and which ones are fundamental parts of your creative process.
Here's a comment on Emacs: ffmpeg and macOS aliasing commands on Hacker News.
I use Typinator expansions with the input video path (from the clipboard) as a variable to perform common ffmpeg operations, such as downsize, speed up, extract audio, strip audio, export frames to create GIF animations, or encoding as MP4. Typinator lets you use what's on the clipboard but also add scripting (say, Python or JavaScript) when certain parameters need to be calculated, like sizes or names.
Here's a comment on When Paper Beats the Paper on Hacker News.
The digital medium often strips human expression when capturing input.
Devices such as the Apple Pencil are getting better at capturing pressure and highly detailed paths, which in turn are encoded as vectors and can gather information that is lost in physical mediums, such as coordinates, drawing order, and temporality, even capturing things not present on paper such as hover gestures and pressure.
Yet it's common practice to reduce gestures and clicks to points, lines, or curves - two clicks from thousands of users in a CAD environment may output the exact same line from point A to point B - a practice that often forgets about the expressiveness of pencil strokes on a sheet of paper, features which could be used by machine learning algorithms to discern intent in the input of different users.
[…]
"Burning through pages" is a great way to put it. No need to worry about sheets of paper lying around or scanning and document what you do. It's all there. I especially like that, with the "fineliner," strokes are vector polylines (as opposed to shapes in other brushes) that can be used later to simplify, redraw, and animate documents.
According to OpenAI, "embeddings are numerical representations of concepts converted to number sequences, which make it easy for computers to understand the relationships between those concepts."
They introduced a new text and code embeddings API endpoint on January 25, 20221, capable of measuring the relatedness of text strings.
Here's a list of common uses of text embeddings, as listed in OpenAI's documentation.
I look forward to testing this API on my writing to see how well it recommends, classifies, and clusters my mini-essays.
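As a rough sketch of what that relatedness measurement might look like, the snippet below fetches two embeddings and compares them with cosine similarity. It assumes the pre-1.0 openai Python package and the text-embedding-ada-002 model available at the time of writing; the interface may differ in newer versions, and you'd need your own API key.
import numpy as np
import openai

openai.api_key = 'YOUR_API_KEY'  # Assumes you have an OpenAI API key

def embed(text):
    # Request an embedding vector for a piece of text
    response = openai.Embedding.create(input=[text], model='text-embedding-ada-002')
    return np.array(response['data'][0]['embedding'])

a = embed('Writing every day is easy.')
b = embed('Publishing daily is harder.')

# Cosine similarity: closer to 1.0 means more related
similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(similarity)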
Text and Code Embeddings by Contrastive Pre-Training. OpenAI. Jan 25, 2022. ↩
Here are my highlights from The Making of Toy Story, a paper published in 1996 by Mark Henne, Hal Hickel, Ewan Johnson, and Sonoko Konishi, from Pixar Animation Studios.
History
Modeling
Layout
Animation
Lighting
Production hardware
Conclusions
The fact that we're starting to get access to these huge pretrained models offline, for free, and on our local machines is insane.
Whisper lets us transcribe and translate, Stable Diffusion lets us generate images, and GPT-like models generate text.
There's a lot more to come.
We're just getting started.
I asked ChatGPT how to deserialize a base64 string in Golang.
Here's ChatGPT's output.
package main

import (
    "encoding/base64"
    "fmt"
)

func main() {
    // The base64 string you want to deserialize
    encodedString := "SGVsbG8sIHdvcmxkIQ=="

    // Decode the base64 string
    decodedBytes, err := base64.StdEncoding.DecodeString(encodedString)
    if err != nil {
        fmt.Println("Error decoding base64 string:", err)
        return
    }

    // Convert the decoded bytes to a string and print it
    decodedString := string(decodedBytes)
    fmt.Println(decodedString)
}
This code will print "Hello, world!" to the console.
In this example, we use the base64.StdEncoding type to decode the base64 string. This type is part of the encoding/base64 package, which provides functions for working with base64 strings in Go.
First, we use the DecodeString method to decode the base64 string and store the result in a variable called decodedBytes. This method returns a byte slice containing the decoded data, and an error value.
Next, we convert the decodedBytes slice to a string using the string function. This allows us to print the decoded string to the console.
If any errors occur during the decoding process, we print the error message and return from the function. Otherwise, the decoded string will be printed to the console.