I was running cron jobs that worked with macOS Mojave, Catalina, Big Sur, Monterey, and Ventura but stopped working after I updated to macOS Sonoma.
Here are two sample errors.
ls: .: Operation not permitted
zip error: Nothing to do! (try: zip -qr9 ~/folder/file.zip . -i *)
An "Operation not permitted" error message when running a cron job on macOS typically signals a permission issue.
cron
cron
requires the proper permissions to access other commands.
You'll need to grant "Full Disk Access" to cron or to the Terminal app to ensure it can execute jobs properly in macOS Sonoma.
Here's how.
System Settings
> Privacy & Security
> Privacy
section.Full Disk Access
from the sidebar./usr/sbin
folder with Finder. (You can do that with CMD + SHIFT + G
and entering the path.)cron
app binary.ChatGPT helped me get to a solution faster.
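Once cron has Full Disk Access, a throwaway job can confirm it can read protected folders. A sketch (the log path is arbitrary):

```
# Add with `crontab -e`. Runs every minute; remove it once confirmed.
* * * * * ls ~/Desktop > /tmp/cron-test.log 2>&1
```

If the log fills with filenames instead of "Operation not permitted", the permission fix worked.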
Say we have a TypeScript interface with required and optional values.
interface NonosOptions {
thickness: number,
pressure?: number
}
thickness is required, pressure is optional.
If we create an object of type NonosOptions, we can omit pressure but not thickness.
const options: NonosOptions = {
thickness: 1.5
}
We can now destructure our options with a default pressure value, which will only be used if options doesn't define a value.
const { thickness = 2, pressure = 0.75 } = options
// thickness = 1.5
// pressure = 0.75
As you can see, thickness ignores the 2 fallback because options sets it to 1.5.
But pressure is set to 0.75 because options doesn't define a pressure value.
If pressure is defined in options, both the thickness and pressure destructuring fallbacks are ignored.
const options: NonosOptions = {
thickness: 1.5,
pressure: 0.25
}
const { thickness = 2, pressure = 0.75 } = options
// thickness = 1.5
// pressure = 0.25
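The same fallback syntax works directly in a function signature, so callers can omit optional fields. A sketch (describe is a made-up helper):

```typescript
interface NonosOptions {
  thickness: number
  pressure?: number
}

// A hypothetical helper that destructures its options parameter with fallbacks.
const describe = ({ thickness = 2, pressure = 0.75 }: NonosOptions): string =>
  `thickness=${thickness} pressure=${pressure}`

console.log(describe({ thickness: 1.5 }))
// thickness=1.5 pressure=0.75
console.log(describe({ thickness: 1.5, pressure: 0.25 }))
// thickness=1.5 pressure=0.25
```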
import * as React from 'react'
import * as Server from 'react-dom/server'
let Greet = () => <h1>Hello, Nono!</h1>
console.log(Server.renderToString(<div><Greet /></div>))
// <div><h1>Hello, Nono!</h1></div>
The tricky part is running this code.
You first need to build it, say, with esbuild, then execute it.
# Build with esbuild.
esbuild RenderToString.jsx --bundle --outfile=RenderToString.js
# Run with Node.js.
node RenderToString.js
# <div><h1>Hello, Nono!</h1></div>
Here's how to deploy your Vite app to your local network so you can access it from other devices connected to the same WiFi, say, your iPhone or iPad.
npx vite --host {local-ip-address}
If you're on macOS, you can simply run the following.
npx vite --host $(ipconfig getifaddr en0)
A fresh Vite project will likely have a dev key in your package.json's scripts property mapping that Yarn or NPM command to Vite, e.g., "dev": "vite", so you can type yarn dev or npm run dev and have Vite run your application in development mode.
yarn dev
# VITE v4.2.1 ready in 165 ms
#
# ➜ Local: http://localhost:5173/
# ➜ Network: use --host to expose
# ➜ press h to show help
That's the same as running npx vite or ./node_modules/.bin/vite.
Before we can deploy to our IP address, we need to know what it is.
You can use ipconfig on Windows and ifconfig on macOS.
Henry Black shared a trick to get your Mac's local IP address with macOS's ipconfig utility.
ipconfig getifaddr en0
# 192.168.1.34
All you need to do is pass your IP address as Vite's --host
argument.
npx vite --host $(ipconfig getifaddr en0)
# VITE v4.2.1 ready in 166 ms
#
# ➜ Local: http://192.168.1.34:5173/
# ➜ Network: http://192.168.1.34:5173/
# ➜ press h to show help
Now I can access my Vite app from other devices in the same network, which comes in handy if you want to test your app on other computers, phones, or tablets.
Remember, npx vite is interchangeable with yarn dev, npm run dev, or ./node_modules/.bin/vite.
For more information, read Vite's Server Options.
If you found this useful, let me know at @nonoesp!
Here's how to connect and communicate with WebSocket servers from browser client applications using the WebSocket API and the WebSocket protocol.
// Create a WebSocket client in the browser.
const ws = new WebSocket("ws://localhost:1234");
// Log incoming messages to the console.
ws.onmessage = function (event) {
// This runs when receiving a message.
console.log(event.data);
};
ws.onopen = () => {
// This runs when we connect.
// Submit a message to the server.
ws.send(`Hello, WebSocket! Sent from a browser client.`);
};
Note that if you restart your Droplet you may have to restart services that are running in the background manually.
# Restart the Droplet now
shutdown -r now
In my case, if Nginx doesn't restart automatically after the restart, I need to run the following commands.
sudo fuser -k 80/tcp && sudo fuser -k 443/tcp
sudo service nginx restart
Here's a one-liner to turn any website into dark mode.
body, img { filter: invert(0.92) }
I apply this to selected sites using Stylebot, a Chrome extension that lets you apply custom CSS to specific websites.
In a nutshell, the CSS inverts the entire website and then inverts images again to render them normally.
You can adjust the invert filter's amount parameter, which in the example is set to 0.92.
A value of 0 would be no color inversion at all; 1 (i.e., 100%) would be full inversion, where whites turn black and blacks turn white.
I often prefer to stay within 90–95% to reduce the contrast.
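If you only want the inversion when your system is in dark mode, you can scope the rule with a media query, and invert videos too so they render normally:

```css
@media (prefers-color-scheme: dark) {
  body, img, video { filter: invert(0.92); }
}
```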
docker run -it -p HOST_PORT:CONTAINER_PORT your-image
When you run services inside of Docker in specific ports, those are internal ports on the virtual container environment. If you want to connect to those services from your machine, you need to expose ports to the outside world explicitly. In short, you need to map TCP ports in the container to ports on the Docker host, which may be your computer. Here's how to do it.
Let's imagine we have a Next.js app running inside our Docker container.
› docker run -it my-app-image
next dev
# ready - started server on 0.0.0.0:3000, url: http://localhost:3000
The site is exposed on port 3000 of the container, but we can't access it from our machine at http://localhost:3000.
Let's map the port.
› docker run -it -p 1234:3000 my-app-image
next dev
# ready - started server on 0.0.0.0:3000, url: http://localhost:3000
Now we can access the app at http://localhost:1234. When we hit port 1234, Docker forwards the communication to port 3000 of the container.
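The same mapping can live in a Compose file, assuming the hypothetical my-app-image from above:

```yaml
# docker-compose.yml
services:
  app:
    image: my-app-image
    ports:
      - "1234:3000" # HOST:CONTAINER
```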
You can upload Shorts to YouTube with the YouTube API as you would upload any other video. Simply ensure your video has an aspect ratio of 9:16 and is less than 60 seconds. YouTube will automatically set it as a Short.
Follow this guide to see how to upload videos to YouTube with the YouTube API.
Here's how to define simple async
functions in TypeScript.
(async (/*arguments*/) => {/*function logic*/})(/*values*/);
// Define an asynchronous function.
const helloAsync = async() => { console.log("Hey, Async!"); }
// Call it asynchronously.
helloAsync();
(async(text: string) => { console.log(text); })("Hello, Async!")
(async(text: string) => { setTimeout(() => console.log(text), 2000); })("Hello, Async!")
// Say we have an async talk() function that logs text to the console.
const talk = async(text: string) => { console.log(text); }
// And a sleep() function that uses a Promise to wait for milliseconds.
const sleep = (ms: number) => {
return new Promise(resolve => setTimeout(resolve, ms));
}
// We can wrap calls to async functions in an async function.
// Then `await` to execute them synchronously.
(async () => {
await talk(`Hello!`);
await sleep(1000);
await talk(`What's up?`);
await sleep(2000);
await talk(`Bye now!`);
})();
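Sequential awaits run one after another. To run several async functions concurrently and wait for all of them, you can use Promise.all. A sketch with made-up fetchName and fetchAge helpers:

```typescript
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

const fetchName = async (): Promise<string> => {
  await delay(100)
  return "Nono"
}

const fetchAge = async (): Promise<number> => {
  await delay(100)
  return 32
}

;(async () => {
  // Both calls run concurrently, so the total wait is ~100 ms, not ~200 ms.
  const [name, age] = await Promise.all([fetchName(), fetchAge()])
  console.log(name, age)
  // Nono 32
})()
```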
Here's how to randomize a list of strings in bash.
On macOS, you can use Terminal or iTerm2. (Note that macOS doesn't ship shuf by default; you can install it with Homebrew's coreutils package.)
The shuf command shuffles a list that is "piped" to it.
An easy way to do that is to list a directory's contents with ls and then shuffle them.
ls ~/Desktop | shuf
The easiest way to shuffle a set of strings is to define an array in bash and shuffle it with shuf.
WORDS=('Milk' 'Bread' 'Eggs'); shuf -e ${WORDS[@]}
You can use pbcopy to copy the shuffled list to your clipboard.
WORDS=('Milk' 'Bread' 'Eggs' ); shuf -e ${WORDS[@]} | pbcopy
Another way to randomize a list of strings from bash is to create a text file, in this case named words.txt, with a string value per line.
Bread
Milk
Chicken
Turkey
Eggs
You can create this file manually or from the command line. (Depending on your shell, echo may need the -e flag to interpret \n; printf works everywhere.)
printf "Bread\nMilk\nChicken\nTurkey\nEggs\n" > words.txt
Then, we cat the contents of words.txt and shuffle the order of the lines with shuf.
cat words.txt | shuf
# Eggs
# Milk
# Chicken
# Turkey
# Bread
Again, you can save the result to the clipboard with pbcopy.
cat words.txt | shuf | pbcopy
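If shuf isn't available on your machine, Python's random.shuffle is a portable alternative:

```python
import random

words = ["Milk", "Bread", "Eggs"]
random.shuffle(words)  # Shuffles the list in place.
print(words)
```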
If you found this useful, let me know!
Here's a Python class that can track and push metrics to AWS CloudWatch.
Metrics are reset to their initial values on creation and when metrics are uploaded to CloudWatch.
# metrics.py
'''
A metrics class ready to track and push metrics to AWS CloudWatch.
'''
from datetime import datetime
import os
import boto3
# CloudWatch metrics namespace.
METRICS_NAMESPACE = 'my_metrics_namespace'
# Duration to wait between metric uploads.
METRICS_UPLOAD_THRESHOLD_SECONDS = 50
class Metrics:
'''
Holds metrics, serializes them to CloudWatch format,
and ingests foreign metric values.
'''
def __init__(self):
self.reset()
def reset(self):
'''
Resets metric values and last upload time.
'''
self.last_upload_time = datetime.now()
# Your custom metrics and initial values
# Note that here we're using 'my_prefix' as
# a custom prefix in case you want this class
# to add a prefix namespace to all its metrics.
self.my_prefix_first_metric = 0
self.my_prefix_second_metric = 0
def to_data(self):
'''
Serializes metrics and their values.
'''
def to_cloudwatch_format(name, value):
return {'MetricName': name, 'Value': value}
result = []
for name, value in vars(self).items():
if name != 'last_upload_time':
result.append(to_cloudwatch_format(name, value))
return result
def ingest(self, metrics, prefix=''):
'''
Adds foreign metric values to this metrics object.
'''
input_metric_names = [attr for attr in dir(metrics)
if not callable(getattr(metrics, attr))
and not attr.startswith("__")]
# Iterate through foreign keys and add metric values.
for metric_name in input_metric_names:
# Get value of foreign metric.
input_metric_value = getattr(metrics, metric_name)
# Get metric key.
metric_key = f'{prefix}_{metric_name}'
# Get metric value.
metric_value = getattr(self, metric_key)
# Add foreign values to this metrics object.
setattr(
self,
metric_key,
input_metric_value + metric_value
)
def upload(self, force=False):
'''
Uploads metrics to CloudWatch when time since last
upload is above a duration or when forced.
'''
# Get time elapsed since last upload.
seconds_since_last_upload = \
(datetime.now() - self.last_upload_time).seconds
# Only upload if duration is greater than threshold,
# or when the force flag is set to True.
if seconds_since_last_upload > METRICS_UPLOAD_THRESHOLD_SECONDS or force:
# Upload metrics to CloudWatch.
cloudwatch = boto3.client(
'cloudwatch',
os.getenv('AWS_REGION')
)
cloudwatch.put_metric_data(
Namespace=METRICS_NAMESPACE,
MetricData=self.to_data()
)
# Reset metrics.
self.reset()
To use this class, we just have to instantiate a metrics object, track some metrics, and upload them.
# Create a metrics object.
metrics = Metrics()
# Add values to its metrics.
metrics.my_prefix_first_metric += 3
metrics.my_prefix_second_metric += 1
# Upload metrics to CloudWatch.
metrics.upload(force=True)
If you're processing metrics at a fast pace, you don't want to upload them every single time you increase their value; otherwise, CloudWatch will complain. In certain cases, AWS CloudWatch's limit is 5 transactions per second (TPS) per account or AWS Region. When this limit is reached, you'll receive a RateExceeded throttling error.
By calling metrics.upload(force=False) we only upload once every METRICS_UPLOAD_THRESHOLD_SECONDS. (In this example, at most every 50 seconds.)
import time
# Create a metrics object.
metrics = Metrics()
for i in range(0, 100, 1):
# Wait for illustration purposes,
# as if we were doing work.
time.sleep(1)
# Add values to its metrics.
metrics.my_prefix_first_metric += 3
metrics.my_prefix_second_metric += 1
# Only upload if more than the threshold
# duration has passed since we last uploaded.
metrics.upload()
# Force-upload metrics to CloudWatch once we're done.
metrics.upload(force=True)
Lastly, here's how to ingest foreign metrics with or without a prefix.
# We define a foreign metrics class.
class OtherMetrics:
def __init__(self):
self.reset()
def reset(self):
# Note that here we don't have 'my_prefix'.
self.first_metric = 0
self.second_metric = 0
# We instantiate both metric objects.
metrics = Metrics()
other_metrics = OtherMetrics()
# The foreign metrics track values.
other_metrics.first_metric += 15
other_metrics.second_metric += 3
# Then our main metrics class ingests those metrics.
metrics.ingest(other_metrics, prefix='my_prefix')
# Then our main metrics class has those values.
print(metrics.my_prefix_first_metric)
# Prints 15
print(metrics.my_prefix_second_metric)
# Prints 3
If you found this useful, let me know!
Take a look at other posts about code, Python, and Today I Learned(s).
Here's how to sort a Python list of dictionaries by a key, a property name, of its items. Check this post if you're looking to sort a list of lists instead.
# A list of people
people = [
{'name': 'Nono', 'age': 32, 'location': 'Spain'},
{'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
{'name': 'Phillipe', 'age': 100, 'location': 'France'},
{'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
]
# Sort people by age, ascending
people_sorted_by_age_asc = sorted(people, key=lambda x: x['age'])
print(people_sorted_by_age_asc)
# [
# {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
# {'name': 'Nono', 'age': 32, 'location': 'Spain'},
# {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
# {'name': 'Phillipe', 'age': 100, 'location': 'France'}
# ]
# Sort people by age, descending
people_sorted_by_age_desc = sorted(people, key=lambda x: -x['age'])
print(people_sorted_by_age_desc)
# [
# {'name': 'Phillipe', 'age': 100, 'location': 'France'},
# {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
# {'name': 'Nono', 'age': 32, 'location': 'Spain'},
# {'name': 'Alice', 'age': 20, 'location': 'Wonderland'}
# ]
# Sort people by name, ascending
people_sorted_by_name_asc = sorted(people, key=lambda x: x['name'])
print(people_sorted_by_name_asc)
# [
# {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
# {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
# {'name': 'Nono', 'age': 32, 'location': 'Spain'},
# {'name': 'Phillipe', 'age': 100, 'location': 'France'}
# ]
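A note on descending sorts: negating the key only works for numbers. sorted's reverse=True flag works for any comparable type, and operator.itemgetter can replace the lambda:

```python
from operator import itemgetter

people = [
    {'name': 'Nono', 'age': 32},
    {'name': 'Alice', 'age': 20},
    {'name': 'Jack', 'age': 45},
]

# Sort by name, descending — negation wouldn't work on strings.
by_name_desc = sorted(people, key=itemgetter('name'), reverse=True)
print([p['name'] for p in by_name_desc])
# ['Nono', 'Jack', 'Alice']
```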
You can measure the time elapsed during the execution of Python commands by keeping a reference to the start time and then, at any point in your program, subtracting that start time from the current time to obtain the duration between the two points in time.
from datetime import datetime
import time
# Define the start time.
start = datetime.now()
# Run some code..
time.sleep(2)
# Get the time delta since the start.
elapsed = datetime.now() - start
# datetime.timedelta(seconds=2, microseconds=5088)
# 0:00:02.005088
# Get the seconds since the start.
elapsed_seconds = elapsed.seconds
# 2
Let's create two helper functions to get the current time (i.e., now) and the elapsed time at any moment.
# Returns current time
# (and, if provided, prints the event's name)
def now(eventName = ''):
if eventName:
print(f'Started {eventName}..')
return datetime.now()
# Store current time as `start`
start = now()
# Returns time elapsed since `beginning`
# (and, optionally, prints the duration in seconds)
def elapsed(beginning = start, log = False):
duration = datetime.now() - beginning
if log:
print(f'{duration.seconds}s')
return duration
With those utility functions defined, we can measure the duration of different events.
# Define time to wait
wait_seconds = 2
# Measure duration (while waiting for 2 seconds)
beginning = now(f'{wait_seconds}-second wait.')
# Wait.
time.sleep(wait_seconds)
# Get time delta.
elapsed_time = elapsed(beginning, True)
# Prints 2s
# Get seconds.
elapsed_seconds = elapsed_time.seconds
# 2
# Get microseconds.
elapsed_microseconds = elapsed_time.microseconds
# e.g., 4004
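For short benchmarks, time.perf_counter offers a monotonic, high-resolution clock that isn't affected by system clock changes:

```python
import time

start = time.perf_counter()
time.sleep(0.1)  # Stand-in for real work.
elapsed = time.perf_counter() - start
print(f'{elapsed:.3f}s')
```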
If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, Python, React, and TypeScript.
Here's how to sort a Python list by a key of its items. Check this post if you're looking to sort a list of dictionaries instead.
# A list of people
# name, age, location
people = [
['Nono', 32, 'Spain'],
['Alice', 20, 'Wonderland'],
['Phillipe', 100, 'France'],
['Jack', 45, 'Caribbean'],
]
# Sort people by age, ascending
people_sorted_by_age_asc = sorted(people, key=lambda x: x[1])
# [
# ['Alice', 20, 'Wonderland'],
# ['Nono', 32, 'Spain'],
# ['Jack', 45, 'Caribbean'],
# ['Phillipe', 100, 'France']
# ]
# Sort people by age, descending
people_sorted_by_age_desc = sorted(people, key=lambda x: -x[1])
# [
# ['Phillipe', 100, 'France'],
# ['Jack', 45, 'Caribbean'],
# ['Nono', 32, 'Spain'],
# ['Alice', 20, 'Wonderland']
# ]
# Sort people by name, ascending
people_sorted_by_name_asc = sorted(people, key=lambda x: x[0])
# [
# ['Alice', 20, 'Wonderland'],
# ['Jack', 45, 'Caribbean'],
# ['Nono', 32, 'Spain'],
# ['Phillipe', 100, 'France']
# ]
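You can also sort by several columns at once by returning a tuple from the key function; Python compares tuples element by element:

```python
people = [
    ['Nono', 32, 'Spain'],
    ['Alice', 20, 'Wonderland'],
    ['Jill', 45, 'Caribbean'],
    ['Jack', 45, 'Caribbean'],
]

# Sort by age, then by name to break ties.
by_age_then_name = sorted(people, key=lambda x: (x[1], x[0]))
print([p[0] for p in by_age_then_name])
# ['Alice', 'Nono', 'Jack', 'Jill']
```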
Here's how to read contents from a comma-separated value (CSV) file in Python; maybe a CSV that already exists or a CSV you saved from Python.
import csv
csv_file_path = 'file.csv'
with open(csv_file_path, encoding='utf-8') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
# Read all rows once; the reader is exhausted after one pass.
rows = list(csv_reader)
# Print the first five rows
for row in rows[:5]:
print(row)
# Print all rows
for row in rows:
print(row)
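If your CSV has a header row, csv.DictReader maps each row to a dictionary keyed by column name. A sketch with an in-memory file (the columns are made up):

```python
import csv
import io

data = io.StringIO('name,age\nNono,32\nAlice,20\n')

for row in csv.DictReader(data):
    print(row['name'], row['age'])
# Nono 32
# Alice 20
```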
Here's how to generate pseudo-random numbers in Python.
import random
# Random generation seed for reproducible results
seed = 42
# Float
random.Random(seed).uniform(3,10)
# 7.475987589205186
# Integer
int(random.Random(seed).uniform(3,10))
# 7
# Integer
random.Random(seed).randint(0, 999)
# 654
See the random module for more information.
Here's how to pass arguments to a Dockerfile
when building a custom image with Docker.
First, you need to define a Dockerfile
which uses an argument.
# Dockerfile
FROM python
ARG code_dir # Our argument
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./$code_dir /code/
RUN pip install -r requirements.txt
What the above Dockerfile
does is parametrize the location of the directory of script.py
, our Docker image's entry point.
For this example's sake, let's assume our directory structure looks like the following.
project/
Dockerfile
code_a/script.py
code_b/script.py
# code_a/script.py
print('This is code_a!')
# code_b/script.py
print('This is code_b!')
Then you'll pass the code_dir
variable as an argument to docker build
to decide whether the Dockerfile
is going to COPY
folder code_a
or code_b
into our image.
Let's pass code_a
as our code_dir
first.
docker build -t my_image_a --build-arg code_dir=code_a .
docker run -it my_image_a
# Prints 'This is code_a!'
Then code_b
.
docker build -t my_image_b --build-arg code_dir=code_b .
docker run -it my_image_b
# Prints 'This is code_b!'
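ARG can also declare a default value, used when docker build runs without --build-arg:

```
# Dockerfile
FROM python
ARG code_dir=code_a # Defaults to code_a when --build-arg is omitted.
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./$code_dir /code/
```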
The objective of this example was to avoid having two different Dockerfiles that look exactly the same but simply specify different source code paths.
We could have done the same with the following two Dockerfiles and specifying which Docker file to use in each case with the -f
flag.
# Dockerfile.code_a
FROM python
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./code_a /code/
RUN pip install -r requirements.txt
# Dockerfile.code_b
FROM python
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./code_b /code/
RUN pip install -r requirements.txt
docker build -t my_image_a -f Dockerfile.code_a .
docker run -it my_image_a
# Prints 'This is code_a!'
docker build -t my_image_b -f Dockerfile.code_b .
docker run -it my_image_b
# Prints 'This is code_b!'
If you found this useful, let me know!
Leading zeros are extra zeros to the left of a number used to keep a consistent number of digits across a set of numbers.
For instance, 0001, 0002, and 0003 is a good format if you think you'll get to thousands of entries, as you can stay at four digits up to 9999.
# Define your number
number = 1
two_digits = f'{number:02d}'
# 01
four_digits = f'{number:04d}'
# 0001
We use the Python format specifier {my_number:04d} to enforce a minimum number of digits in our number variable.
You can use it to set the value of a string or to create or print a longer string with that number, without necessarily having to store its value.
a_number = 42
print(f'The number is {a_number:06d}.')
# The number is 000042.
print(f'The number is {512:06d}.')
# The number is 000512.
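If you already have the number as a string, or prefer a method call, str.zfill pads with leading zeros to a given width:

```python
number = 42
print(str(number).zfill(6))
# 000042
```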
The e flag/option of pip "installs a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url."
pip install -e .
-e, --editable <path/url>
As described in the expanded command flag, -e stands for editable.
Today I learned you can use the plus (+) operator to concatenate or extend lists in Python.
Say you have two lists.
list_a = [1, 2, 3]
list_b = ['Nono', 'MA']
And that you want to create a single list with the contents of both, which would look something like [1, 2, 3, 'Nono', 'MA'].
You can simply add both lists to obtain that result.
>>> combined_list = [1, 2, 3] + ['Nono', 'MA']
>>> combined_list
[1, 2, 3, 'Nono', 'MA']
Of course, it doesn't make much sense in this example because we're explicitly defining the lists, and we could define a combined list directly.
combined_list = [1, 2, 3, 'Nono', 'MA']
But it can be useful when we actually need to add lists, for instance, to concatenate the results of glob file-listing operations.
>>> from glob import glob
>>> files_a = glob('a/*')
>>> files_a
['a/file.txt', 'a/image.jpg']
>>> files_b = glob('b/*')
>>> files_b
['b/data.json', 'b/profile.jpeg']
>>> all_files = files_a + files_b
>>> all_files
['a/file.txt', 'a/image.jpg', 'b/data.json', 'b/profile.jpeg']
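Note that + returns a new list. To grow an existing list in place, use += or list.extend:

```python
files = ['a/file.txt']
files += ['b/data.json']          # In place, equivalent to files.extend([...])
files.extend(['b/profile.jpeg'])
print(files)
# ['a/file.txt', 'b/data.json', 'b/profile.jpeg']
```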
If you try to serialize a NumPy array to JSON in Python, you'll get the error below.
TypeError: Object of type ndarray is not JSON serializable
Luckily, NumPy has a built-in method to convert one- or multi-dimensional arrays to lists, which are in turn JSON serializable.
import numpy as np
import json
# Define your NumPy array
arr = np.array([[100,200],[300,400]])
# Convert the array to list
arr_as_list = arr.tolist()
# Serialize as JSON
json.dumps(arr_as_list)
# '[[100, 200], [300, 400]]'
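If you serialize NumPy values often, you can subclass json.JSONEncoder so json.dumps handles arrays automatically. A sketch (the class name is arbitrary):

```python
import json

import numpy as np


class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        # Fall back to .tolist() for NumPy arrays.
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)


print(json.dumps({'arr': np.array([1, 2, 3])}, cls=NumpyEncoder))
# {"arr": [1, 2, 3]}
```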
Here's the error I was getting when trying to return a NumPy ndarray in the response body of an AWS Lambda function.
Object of type ndarray is not JSON serializable
import numpy as np
import json
# A NumPy array
arr = np.array([[1,2,3],[4,5,6]]).astype(np.float64)
# Serialize the array
json.dumps(arr)
# TypeError: Object of type ndarray is not JSON serializable
NumPy arrays provide a built-in method called .tolist() to convert them to lists.
import numpy as np
import json
# A NumPy array
arr = np.array([[1,2,3],[4,5,6.78]]).astype(np.float64)
# Convert the NumPy array to a list
arr_as_list = arr.tolist()
# Serialize the list
json.dumps(arr_as_list)
Here's how to obtain the numerical value of the chmod permissions of a file on macOS. (Note that this method also works for directories.)
stat -f %A file.md
# 755
Let's give it a try.
First, create a new file and set its permissions.
touch text.md
chmod 777 text.md
Then we can retrieve its chmod number.
stat -f %A text.md
# 777
And here's how to retrieve its chmod string value.
ls -n text.md
# -rwxrwxrwx 1 501 20 0 Jun 21 13:53 text.md*
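On Linux, GNU stat uses -c instead of macOS's -f for the same query:

```shell
touch text.md
chmod 777 text.md
# GNU coreutils (Linux) uses -c where macOS's BSD stat uses -f.
stat -c %a text.md
# 777
```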