Here's a one-liner to turn any website into dark mode.
body, img { filter: invert(0.92) }
I apply this to selected sites using Stylebot, a Chrome extension that lets you apply custom CSS to specific websites.
In a nutshell, the CSS inverts the entire website and then inverts images again to render them normally.
You can adjust the invert filter's amount parameter, which in the example is set to 0.92. An amount of 0 applies no inversion at all, while 1 (or 100%) applies a full inversion: whites turn black, and blacks turn white. I often prefer to stay within 90–95% to reduce the contrast.
After two months of pause, we're preparing to release new Getting Simple podcast episodes.
Editing and publishing add friction and delays to my process, so I'm exploring code and machine learning workflows to post-process episode audio and generate transcripts, summaries, and notes.
I'm not there yet. But OpenAI's Whisper (free) and Descript (paid) already provide accurate transcriptions. Existing projects and companies use GPT-like language models to extract episode keywords, topics, chapters, and summaries.
We'll soon have automatic episode notes.
It's exciting. I think we're getting very, very close.
I've also played with Spotify's pedalboard Python package to post-process audio without relying on a Digital Audio Workstation (DAW).
That's cool because I can create reusable scripts for specific recording conditions and forget about audio editing: compressing, limiting, applying noise gates, or normalizing, things you'd otherwise do in Adobe Audition.
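As a sketch of what such a script could look like with pedalboard (assuming the package is installed and an episode.wav recording exists; the file names and thresholds here are placeholders, not recommendations):

```python
from pedalboard import Pedalboard, Compressor, Limiter, NoiseGate
from pedalboard.io import AudioFile

# Chain effects as you would stack guitar pedals: gate, compress, limit.
board = Pedalboard([
    NoiseGate(threshold_db=-45),
    Compressor(threshold_db=-16, ratio=2.5),
    Limiter(threshold_db=-1),
])

# Read the raw episode audio.
with AudioFile('episode.wav') as f:
    audio = f.read(f.frames)
    samplerate = f.samplerate

# Run the audio through the effect chain.
processed = board(audio, samplerate)

# Write the post-processed audio to disk.
with AudioFile('episode-processed.wav', 'w', samplerate, processed.shape[0]) as f:
    f.write(processed)
```

Once a chain works for a given recording setup, the same script can batch-process every episode recorded under those conditions.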
Let me know if you'd like to see these automations in the live stream and video tutorials or shared here on Twitter at @nonoesp.
The point of writing as a human is to express ourselves. To pour words on paper (or the screen) and reflect on who you are, to learn, to evolve, and to inspire others. You can influence and inspire your future self as well.
Yesterday, I woke up and started the day writing five hundred words before I did any work. This is a practice I follow and will continue to follow. It doesn't make sense to delegate this to a machine because the whole point is to pour things out of my mind. Maybe this can turn into a conversation with an AI in the long run. I talk, we discuss, and my virtual assistant takes notes and generates a document instead of typing at my desk with a keyboard.
Machine intelligence is here to stay, and we'll find it harder to be original as it improves. But we must remember that these systems work because of all the knowledge humans have created before, with our mistakes and biases. And they'll only get better if we continue to produce original content. That may be a mistaken assumption, but I believe it in some way. AI originality is probably down the road, and current systems can hallucinate. But I like to think we'll do better work together with them. We'll have to wait until everything stabilizes to identify which parts won't be done by humans anymore. Or maybe they will, but at a scary-fast pace.
Writing is a medium for creative expression, as are drawing, singing, film, photography, and many, many other forms. Get a pen and write—express yourself. Type with your fingers or thumbs. Shoot a video. Take a photo. Doodle. Tell us a story.
I've installed vnstat on my M1 MacBook Pro with Homebrew to monitor my network usage over time.
# Install vnstat on macOS with Homebrew.
brew install vnstat
Make sure you start the vnstat service with brew for vnstat to monitor your network usage.
brew services start vnstat
vnstat will be running in the background, and you'll have to wait days for it to gather statistics and be able to show you, for instance, the average monthly usage.
› vnstat -m
# gif0: Not enough data available yet.
After a few minutes, you'll see stats in vnstat.
› vnstat -5
# en0 / 5 minute
#
# time rx | tx | total | avg. rate
# ------------------------+-------------+-------------+---------------
# 2023-03-19
# 12:45 839.44 MiB | 2.60 MiB | 842.04 MiB | 23.55 Mbit/s
# 12:50 226.26 MiB | 306.00 KiB | 226.56 MiB | 46.35 Mbit/s
# ------------------------+-------------+-------------+---------------
› vnstat -h
# en0 / hourly
#
# hour rx | tx | total | avg. rate
# ------------------------+-------------+-------------+---------------
# 2023-03-19
# 12:00 1.04 GiB | 2.90 MiB | 1.04 GiB | 28.10 Mbit/s
# ------------------------+-------------+-------------+---------------
› vnstat -m
# en0 / monthly
#
# month rx | tx | total | avg. rate
# ------------------------+-------------+-------------+---------------
# 2023-03 1.04 GiB | 2.90 MiB | 1.04 GiB | 28.10 Mbit/s
# ------------------------+-------------+-------------+---------------
# estimated 3.43 TiB | 9.56 GiB | 3.44 TiB |
Here are my highlights from Works Containing Material Generated by Artificial Intelligence.
One such recent development is the use of sophisticated artificial intelligence (“AI”) technologies capable of producing expressive material.[5] These technologies “train” on vast quantities of preexisting human-authored works and use inferences from that training to generate new content. Some systems operate in response to a user's textual instruction, called a “prompt.” [6] The resulting output may be textual, visual, or audio, and is determined by the AI based on its design and the material it has been trained on. These technologies, often described as “generative AI,” raise questions about whether the material they produce is protected by copyright, whether works consisting of both human-authored and AI-generated material may be registered, and what information should be provided to the Office by applicants seeking to register them.
[I]n 2018 the Office received an application for a visual work that the applicant described as “autonomously created by a computer algorithm running on a machine.” [7] The application was denied because, based on the applicant's representations in the application, the examiner found that the work contained no human authorship. After a series of administrative appeals, the Office's Review Board issued a final determination affirming that the work could not be registered because it was made “without any creative contribution from a human actor.”
In February 2023, the Office concluded that a graphic novel [9] comprised of human-authored text combined with images generated by the AI service Midjourney constituted a copyrightable work, but that the individual images themselves could not be protected by copyright.
In the Office's view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term “author,” which is used in both the Constitution and the Copyright Act, excludes non-humans.
[I]n the current edition of the Compendium, the Office states that “to qualify as a work of `authorship' a work must be created by a human being” and that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”
Individuals who use AI technology in creating a work may claim copyright protection for their own contributions to that work.
Applicants should not list an AI technology or the company that provided it as an author or co-author simply because they used it when creating their work.
docker run -it -p HOST_PORT:CONTAINER_PORT your-image
When you run services on specific ports inside of Docker, those ports are internal to the container's virtual environment. If you want to connect to those services from your machine, you need to explicitly expose ports to the outside world. In short, you need to map TCP ports in the container to ports on the Docker host, which may be your computer. Here's how to do it.
Let's imagine we have a Next.js app running inside our Docker container.
› docker run -it my-app-image
next dev
# ready - started server on 0.0.0.0:3000, url: http://localhost:3000
The site is exposed on port 3000 of the container, but we can't access it from our machine at http://localhost:3000.
Let's map the port.
› docker run -it -p 1234:3000 my-app-image
next dev
# ready - started server on 0.0.0.0:3000, url: http://localhost:3000
We can now open http://localhost:1234 on our machine. When we connect to port 1234, Docker forwards the communication to port 3000 of the container.
You can upload Shorts to YouTube with the YouTube API as you would upload any other video. Simply ensure your video has an aspect ratio of 9:16 and is less than 60 seconds. YouTube will automatically set it as a Short.
Follow this guide to see how to upload videos to YouTube with the YouTube API.
Apple just unlocked new options to price apps: ten-cent steps between $0.10 and $10, fifty-cent steps between $10 and $50, and so on.
Choose from 900 price points — nearly 10 times the number of price points previously available for paid apps and one-time in-app purchases. These options also offer more flexibility, increasing incrementally across price ranges (for example, every $0.10 up to $10, every $0.50 between $10 and $50, etc.).
Here's how to define simple async functions in TypeScript.
(async (/*arguments*/) => {/*function logic*/})(/*values*/);
// Define an asynchronous function.
const helloAsync = async () => { console.log("Hey, Async!"); };
// Call it asynchronously.
helloAsync();

(async (text: string) => { console.log(text); })("Hello, Async!");

(async (text: string) => { setTimeout(() => console.log(text), 2000); })("Hello, Async!");
// Say we have an async talk() function that logs text to the console.
const talk = async (text: string) => { console.log(text); };

// And a sleep() function that uses a Promise to wait for milliseconds.
const sleep = (ms: number) => {
  return new Promise(resolve => setTimeout(resolve, ms));
};

// We can wrap calls to async functions in an async function,
// then `await` each call to execute them sequentially.
(async () => {
  await talk(`Hello!`);
  await sleep(1000);
  await talk(`What's up?`);
  await sleep(2000);
  await talk(`Bye now!`);
})();
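Async functions can also return values, which `await` unwraps. Here's a small sketch (the double function is a made-up example) that also shows Promise.all, which awaits several calls concurrently:

```typescript
// An async function that returns a value wrapped in a Promise.
const double = async (n: number): Promise<number> => n * 2;

(async () => {
  // `await` unwraps the resolved value.
  const one = await double(21);
  console.log(one); // 42

  // Promise.all resolves once every Promise in the array resolves.
  const many = await Promise.all([double(1), double(2), double(3)]);
  console.log(many); // [ 2, 4, 6 ]
})();
```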
Here's how to list the commits that happened between two tags.
git log --pretty=oneline 0.8.0...0.9.0
The two tags, in this case 0.8.0 and 0.9.0, need to exist.
You can list existing tags in a repository as below.
git tag
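Note that the three-dot range shows commits reachable from either tag but not both. When the two tags live on the same branch, a two-dot range, which lists the commits reachable from 0.9.0 but not from 0.8.0, may be what you want (this also assumes both tags exist):

```shell
# Commits in 0.9.0 that aren't in 0.8.0.
git log --pretty=oneline 0.8.0..0.9.0
```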
You can list what packages are installed globally in your system with npm -g list, shorthand for npm --global list, whereas you'd list the packages installed in an NPM project with npm list.
Let's see an example of what the command might return.
npm -g list
# /opt/homebrew/lib
# ├── cross-env@7.0.3
# ├── http-server@14.1.1
# ├── node-gyp@9.3.1
# ├── npm@9.5.0
# ├── pm2@5.2.2
# ├── spoof@2.0.4
# ├── ts-node@10.9.1
# └── typescript@4.9.5
Here are some of the commands we used during the Creative Machine Learning Live 97.
First, create an Anaconda environment or install imaginAIry in your existing Python installation with pip.
pip install imaginairy
Before running the commands below, I entered an interactive imaginAIry shell.
aimg
🤖🧠> # Commands here
# Upscale an image 4x with Real-ESRGAN.
upscale image.jpg
# Generate an image and animate the diffusion process.
imagine "a sunflower" --gif
# Generate an image and create a GIF comparing it with the original.
imagine "a sunflower" --compare-gif
# Schedule argument values.
edit input.jpg \
--prompt "a sunflower" \
--steps 21 \
--arg-schedule "prompt_strength[6:8:0.5]" \
--compilation-anim gif
Here's how to add NuGet packages from a local source to your Visual Studio project.
To do so, first create a folder on your machine to store your local NuGet packages (in my case, local-nugets). Then follow these steps.
- Go to Tools > Options > NuGet Package Manager > Package Sources.
- Click the Add button (the green cross) to create a new Package Source.
- Click Browse (...) to select the folder you previously created -- local-nugets in my case -- and then click on Update.
- Place your NuGet package inside the local-nugets folder, and all that's left is to install the package as follows: go to Project > Manage NuGet Packages > Browse, select your package, and click Install.
Here's how to randomize a list of strings in bash.
On macOS, you can use Terminal or iTerm2.
The shuf command shuffles a list that is "piped" to it. An easy way to do that is to list a directory's contents with ls and then shuffle them.
ls ~/Desktop | shuf
The easiest way to shuffle a set of strings is to define an array in bash and shuffle it with shuf.
WORDS=('Milk' 'Bread' 'Eggs'); shuf -e ${WORDS[@]}
You can use pbcopy to copy the shuffled list to your clipboard.
WORDS=('Milk' 'Bread' 'Eggs'); shuf -e ${WORDS[@]} | pbcopy
Another way to randomize a list of strings from bash is to create a text file, in this case named words.txt, with a string value per line.
Bread
Milk
Chicken
Turkey
Eggs
You can create this file manually or from the command-line with the following command.
echo "Bread\nMilk\nChicken\nTurkey\nEggs" > words.txt
Then, we cat the contents of words.txt and shuffle the order of its lines with shuf.
cat words.txt | shuf
# Eggs
# Milk
# Chicken
# Turkey
# Bread
Again, you can save the result to the clipboard with pbcopy.
cat words.txt | shuf | pbcopy
If you found this useful, let me know!
Here's a Python class that can track and push metrics to AWS CloudWatch.
Metrics are reset to their initial values on creation and when metrics are uploaded to CloudWatch.
# metrics.py
'''
A metrics class ready to track and push metrics to AWS CloudWatch.
'''
from datetime import datetime
import os
import boto3
# CloudWatch metrics namespace.
METRICS_NAMESPACE = 'my_metrics_namespace'
# Duration to wait between metric uploads.
METRICS_UPLOAD_THRESHOLD_SECONDS = 50
class Metrics:
    '''
    Holds metrics, serializes them to CloudWatch format,
    and ingests foreign metric values.
    '''

    def __init__(self):
        self.reset()

    def reset(self):
        '''
        Resets metric values and last upload time.
        '''
        self.last_upload_time = datetime.now()
        # Your custom metrics and initial values.
        # Note that here we're using 'my_prefix' as
        # a custom prefix in case you want this class
        # to add a prefix namespace to all its metrics.
        self.my_prefix_first_metric = 0
        self.my_prefix_second_metric = 0

    def to_data(self):
        '''
        Serializes metrics and their values.
        '''
        def to_cloudwatch_format(name, value):
            return {'MetricName': name, 'Value': value}

        result = []
        for name, value in vars(self).items():
            if name != 'last_upload_time':
                result.append(to_cloudwatch_format(name, value))
        return result

    def ingest(self, metrics, prefix=''):
        '''
        Adds foreign metric values to this metrics object.
        '''
        input_metric_names = [attr for attr in dir(metrics)
                              if not callable(getattr(metrics, attr))
                              and not attr.startswith("__")]

        # Iterate through foreign keys and add metric values.
        for metric_name in input_metric_names:
            # Get value of foreign metric.
            input_metric_value = getattr(metrics, metric_name)
            # Get metric key.
            metric_key = f'{prefix}_{metric_name}'
            # Get metric value.
            metric_value = getattr(self, metric_key)
            # Add foreign values to this metrics object.
            setattr(
                self,
                metric_key,
                input_metric_value + metric_value
            )

    def upload(self, force=False):
        '''
        Uploads metrics to CloudWatch when time since last
        upload is above a duration or when forced.
        '''
        # Get time elapsed since last upload.
        seconds_since_last_upload = \
            (datetime.now() - self.last_upload_time).seconds

        # Only upload if duration is greater than threshold,
        # or when the force flag is set to True.
        if seconds_since_last_upload > METRICS_UPLOAD_THRESHOLD_SECONDS \
                or force:
            # Upload metrics to CloudWatch.
            cloudwatch = boto3.client(
                'cloudwatch',
                os.getenv('AWS_REGION')
            )
            cloudwatch.put_metric_data(
                Namespace=METRICS_NAMESPACE,
                MetricData=self.to_data()
            )
            # Reset metrics.
            self.reset()
To use this class, we just have to instantiate a metrics object, track some metrics, and upload them.
# Create a metrics object.
metrics = Metrics()
# Add values to its metrics.
metrics.my_prefix_first_metric += 3
metrics.my_prefix_second_metric += 1
# Upload metrics to CloudWatch.
metrics.upload(force=True)
If you're processing metrics at a fast pace, you don't want to upload metrics every single time you increase their value; otherwise, CloudWatch will complain. In certain cases, AWS CloudWatch's limit is 5 transactions per second (TPS) per account or AWS Region. When this limit is reached, you'll receive a RateExceeded throttling error.
By calling metrics.upload(force=False), we only upload once every METRICS_UPLOAD_THRESHOLD_SECONDS (in this example, at most every 50 seconds).
import time
# Create a metrics object.
metrics = Metrics()
for i in range(0, 100, 1):
    # Wait for illustration purposes,
    # as if we were doing work.
    time.sleep(1)

    # Add values to its metrics.
    metrics.my_prefix_first_metric += 3
    metrics.my_prefix_second_metric += 1

    # Only upload if more than the threshold
    # duration has passed since we last uploaded.
    metrics.upload()
# Force-upload metrics to CloudWatch once we're done.
metrics.upload(force=True)
Lastly, here's how to ingest foreign metrics with or without a prefix.
# We define a foreign metrics class.
class OtherMetrics:

    def __init__(self):
        self.reset()

    def reset(self):
        # Note that here we don't have 'my_prefix'.
        self.first_metric = 0
        self.second_metric = 0
# We instantiate both metric objects.
metrics = Metrics()
other_metrics = OtherMetrics()
# The foreign metrics track values.
other_metrics.first_metric += 15
other_metrics.second_metric += 3
# Then our main metrics class ingests those metrics.
metrics.ingest(other_metrics, prefix='my_prefix')
# Then our main metrics class has those values.
print(metrics.my_prefix_first_metric)
# Prints 15
print(metrics.my_prefix_second_metric)
# Prints 3
If you found this useful, let me know!
Take a look at other posts about code, Python, and Today I Learned(s).
Here's how to sort a list of Python dictionaries by a key, a property name, of its items. Check this post if you're looking to sort a list of lists instead.
# A list of people
people = [
    {'name': 'Nono', 'age': 32, 'location': 'Spain'},
    {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
    {'name': 'Phillipe', 'age': 100, 'location': 'France'},
    {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
]
# Sort people by age, ascending
people_sorted_by_age_asc = sorted(people, key=lambda x: x['age'])
print(people_sorted_by_age_asc)
# [
# {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
# {'name': 'Nono', 'age': 32, 'location': 'Spain'},
# {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
# {'name': 'Phillipe', 'age': 100, 'location': 'France'}
# ]
# Sort people by age, descending
people_sorted_by_age_desc = sorted(people, key=lambda x: -x['age'])
print(people_sorted_by_age_desc)
# [
# {'name': 'Phillipe', 'age': 100, 'location': 'France'},
# {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
# {'name': 'Nono', 'age': 32, 'location': 'Spain'},
# {'name': 'Alice', 'age': 20, 'location': 'Wonderland'}
# ]
# Sort people by name, ascending
people_sorted_by_name_asc = sorted(people, key=lambda x: x['name'])
print(people_sorted_by_name_asc)
# [
# {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
# {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
# {'name': 'Nono', 'age': 32, 'location': 'Spain'},
# {'name': 'Phillipe', 'age': 100, 'location': 'France'}
# ]
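Note that negating the key, as in the descending example above, only works for numeric values. To sort descending by a string key such as name, pass reverse=True; operator.itemgetter can also replace the lambda. A short sketch:

```python
from operator import itemgetter

people = [
    {'name': 'Nono', 'age': 32},
    {'name': 'Alice', 'age': 20},
    {'name': 'Jack', 'age': 45},
]

# Sort people by name, descending.
people_by_name_desc = sorted(people, key=itemgetter('name'), reverse=True)
print([p['name'] for p in people_by_name_desc])
# ['Nono', 'Jack', 'Alice']
```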
You can measure the time elapsed during the execution of Python commands by keeping a reference to the start time and then subtracting the current time at any point in your program from that start time to obtain the duration between two points in time.
from datetime import datetime
import time
# Define the start time.
start = datetime.now()
# Run some code..
time.sleep(2)
# Get the time delta since the start.
elapsed = datetime.now() - start
# datetime.timedelta(seconds=2, microseconds=5088)
# 0:00:02.005088
# Get the seconds since the start.
elapsed_seconds = elapsed.seconds
# 2
Let's create two helper functions to get the current time (i.e., now) and the elapsed time at any moment.
# Returns current time
# (and, if provided, prints the event's name)
def now(eventName = ''):
    if eventName:
        print(f'Started {eventName}..')
    return datetime.now()

# Store current time as `start`
start = now()

# Returns time elapsed since `beginning`
# (and, optionally, prints the duration in seconds)
def elapsed(beginning = start, log = False):
    duration = datetime.now() - beginning
    if log:
        print(f'{duration.seconds}s')
    return duration
With those utility functions defined, we can measure the duration of different events.
# Define time to wait
wait_seconds = 2
# Measure duration (while waiting for 2 seconds)
beginning = now(f'{wait_seconds}-second wait.')
# Wait.
time.sleep(wait_seconds)
# Get time delta.
elapsed_time = elapsed(beginning, True)
# Prints 2s

# Get seconds.
elapsed_seconds = elapsed_time.seconds
# 2

# Get microseconds.
elapsed_microseconds = elapsed_time.microseconds
# 4004
If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, Python, React, and TypeScript.
Here's how to sort a Python list of lists by an index of its items. Check this post if you're looking to sort a list of dictionaries instead.
# A list of people
# name, age, location
people = [
    ['Nono', 32, 'Spain'],
    ['Alice', 20, 'Wonderland'],
    ['Phillipe', 100, 'France'],
    ['Jack', 45, 'Caribbean'],
]
# Sort people by age, ascending
people_sorted_by_age_asc = sorted(people, key=lambda x: x[1])
# [
# ['Alice', 20, 'Wonderland'],
# ['Nono', 32, 'Spain'],
# ['Jack', 45, 'Caribbean'],
# ['Phillipe', 100, 'France']
# ]
# Sort people by age, descending
people_sorted_by_age_desc = sorted(people, key=lambda x: -x[1])
# [
# ['Phillipe', 100, 'France'],
# ['Jack', 45, 'Caribbean'],
# ['Nono', 32, 'Spain'],
# ['Alice', 20, 'Wonderland']
# ]
# Sort people by name, ascending
people_sorted_by_name_asc = sorted(people, key=lambda x: x[0])
# [
# ['Alice', 20, 'Wonderland'],
# ['Jack', 45, 'Caribbean'],
# ['Nono', 32, 'Spain'],
# ['Phillipe', 100, 'France']
# ]
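As with dictionaries, operator.itemgetter can replace the index lambda, and reverse=True handles descending sorts of non-numeric columns. A short sketch:

```python
from operator import itemgetter

people = [
    ['Nono', 32, 'Spain'],
    ['Alice', 20, 'Wonderland'],
    ['Jack', 45, 'Caribbean'],
]

# Sort people by age (column 1), ascending.
people_by_age = sorted(people, key=itemgetter(1))
print([p[0] for p in people_by_age])
# ['Alice', 'Nono', 'Jack']

# Sort people by name (column 0), descending.
people_by_name_desc = sorted(people, key=itemgetter(0), reverse=True)
print([p[0] for p in people_by_name_desc])
# ['Nono', 'Jack', 'Alice']
```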
Here's how to read contents from a comma-separated value (CSV) file in Python; maybe a CSV that already exists or a CSV you saved from Python.
import csv
csv_file_path = 'file.csv'
with open(csv_file_path, encoding='utf-8') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    rows = list(csv_reader)

    # Print the first five rows
    for row in rows[:5]:
        print(row)

    # Print all rows
    for row in rows:
        print(row)
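If your CSV has a header row, csv.DictReader maps each row to a dictionary keyed by column name. Here's a self-contained sketch that writes a small file first (people.csv is a made-up name):

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'people.csv')

# Write a small CSV with a header row.
with open(path, 'w', encoding='utf-8', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['name', 'age'])
    writer.writerow(['Nono', '32'])
    writer.writerow(['Alice', '20'])

# Read each row as a dictionary keyed by the header.
with open(path, encoding='utf-8') as f:
    rows = list(csv.DictReader(f))
    for row in rows:
        print(row['name'], row['age'])
# Nono 32
# Alice 20
```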
Here's how to generate pseudo-random numbers in Python.
import random
# Random generation seed for reproducible results
seed = 42
# Float
random.Random(seed).uniform(3,10)
# 7.475987589205186
# Integer
int(random.Random(seed).uniform(3,10))
# 7
# Integer
random.Random(seed).randint(0, 999)
# 654
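A seeded Random instance is also handy for reproducible sampling and shuffling; the same seed always produces the same sequence:

```python
import random

rng = random.Random(42)
items = ['Milk', 'Bread', 'Eggs', 'Chicken']

# Pick two unique items.
pick = rng.sample(items, 2)

# Shuffle a copy of the list in place.
shuffled = items[:]
rng.shuffle(shuffled)

# The same seed reproduces the same results.
assert random.Random(7).sample(items, 2) == random.Random(7).sample(items, 2)
```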
See the random module for more information.
Here's how to pass arguments to a Dockerfile when building a custom image with Docker.
First, you need to define a Dockerfile which uses an argument.
# Dockerfile
FROM python
ARG code_dir # Our argument
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./$code_dir /code/
RUN pip install -r requirements.txt
What the above Dockerfile does is parametrize the location of the directory of script.py, our Docker image's entry point.
For this example's sake, let's assume our directory structure looks like the following.
project/
Dockerfile
code_a/script.py
code_b/script.py
# code_a/script.py
print('This is code_a!')
# code_b/script.py
print('This is code_b!')
Then you'll pass the code_dir variable as an argument to docker build to decide whether the Dockerfile is going to COPY folder code_a or code_b into our image.
Let's pass code_a as our code_dir first.
docker build -t my_image_a --build-arg code_dir=code_a .
docker run -it my_image_a
# Prints 'This is code_a!'
Then code_b.
docker build -t my_image_b --build-arg code_dir=code_b .
docker run -it my_image_b
# Prints 'This is code_b!'
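As an aside, an ARG can declare a default value so the --build-arg flag becomes optional. This is a sketch assuming the same directory layout as above:

```dockerfile
# Dockerfile
FROM python
ARG code_dir=code_a # Defaults to code_a when --build-arg is omitted.
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./$code_dir /code/
RUN pip install -r requirements.txt
```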
The objective of this example was to avoid having two different Dockerfiles that look exactly the same but simply specify different source code paths.
We could have done the same with the following two Dockerfiles, specifying which Docker file to use in each case with the -f flag.
# Dockerfile.code_a
FROM python
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./code_a /code/
RUN pip install -r requirements.txt
# Dockerfile.code_b
FROM python
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./code_b /code/
RUN pip install -r requirements.txt
docker build -t my_image_a -f Dockerfile.code_a .
docker run -it my_image_a
# Prints 'This is code_a!'
docker build -t my_image_b -f Dockerfile.code_b .
docker run -it my_image_b
# Prints 'This is code_b!'
If you found this useful, let me know!
The Google Research team has published a paper for MusicLM, a machine learning model that generates high-fidelity music from text prompts, and it works extremely well. But they won't release it to the public, at least not yet.
You can browse and play through the examples to listen to results obtained by the research team for a wide variety of text-to-music tasks, including audio generation from rich captions, long generation, story mode, text and melody conditioning, painting caption conditioning, 10s audio generation from text, and generation diversity.
I'm particularly surprised by the text and melody conditioning examples, where a text prompt—say, "piano solo," "string quartet," or "tribal drums"—can be combined with a melody prompt—say, "bella ciao - humming"—to generate accurate results.
Even though they haven't released the model, Google Research has publicly released MusicCaps to support future research, "a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts."
Leading zeros are extra zeros to the left of a number, used when you want a consistent number of digits across a set of numbers.
For instance, 0001, 0002, and 0003 is a good format if you think you'll reach thousands of entries, as you can stay at four digits up to 9999.
# Define your number
number = 1
two_digits = f'{number:02d}'
# 01
four_digits = f'{number:04d}'
# 0001
We use the Python formatting helper {my_number:04d} to enforce a minimum number of digits in our number variable.
This means you can use it to set the value of a string or to create or print a longer string with that number, without necessarily having to store its value.
a_number = 42
print(f'The number is {a_number:06d}.')
# The number is 000042.
print(f'The number is {512:06d}.')
# The number is 000512.
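If you already have the number as a string, str.zfill() does the same padding without an f-string:

```python
# zfill pads the string on the left with zeros.
print(str(42).zfill(6))
# 000042

# It also handles a leading sign correctly.
print('-42'.zfill(6))
# -00042
```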
The -e flag/option of pip "installs a project in editable mode (i.e. setuptools “develop mode”) from a local project path or a VCS url."
pip install -e .
-e, --editable <path/url>
As described in the expanded command flag, -e stands for editable.
This guide is for macOS Ventura. Check this page for macOS Monterey.
As with port 3000, port 5000 is commonly used to serve local development servers. When updating to the latest macOS operating system, I noticed my React development server (which I'm serving with React create-serve) was using a port other than 5000 because that port was already in use. (You may find a message along the lines of Port 5000 already in use.)
By running lsof -i :5000, I found out the process using the port was named ControlCenter, which is a native macOS application. If this happens to you, even if you brute-force kill the application, it will restart itself. On my laptop, lsof -i :5000 reports that Control Center runs as process id 433. I could kill that process, but macOS keeps restarting it.
The process running on this port turns out to be an AirPlay server. You can deactivate it in System Settings › General › AirDrop & Handoff by unchecking AirPlay Receiver to release port 5000.
As an aside, I just ran into this same issue when trying to run a Node.js server application as of September 13, 2022.
uncaught exception: listen EADDRINUSE: address already in use :::5000
Error: listen EADDRINUSE: address already in use :::5000
If you found this useful, let me know!
I liked a quote from Thoreau in Walden, which I highlighted from Cal Newport's Digital Minimalism. “The cost of a thing is the amount of what I will call life which is required to be exchanged for it, immediately or in the long run.”
It's that time of the year again.
The start of a new year is an important temporal landmark to establish New Year's resolutions, a great time to adopt atomic habits and hone passion projects.
I'll continue to stop overdoing things and ship better content faster, to reduce creative friction by streamlining and automating workflows where possible, and to delegate work to other creatives.
I will always continue to improve and ideate new ways of doing things, but it's good to enjoy established workflows once you reach a good-enough point.
Iterating through a list.
for i in 1 2 3
do
echo $i
done
# 1
# 2
# 3
Iterating through a list generated with a sequence.
for i in $(seq 1 2 10)
do
echo $i
done
# 1
# 3
# 5
# 7
# 9
seq 1 2 10 creates a list of numbers from 1 to 10 in steps of 2.
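Bash also supports a C-style for loop, which produces the same sequence without calling seq:

```shell
for ((i = 1; i <= 10; i += 2))
do
echo $i
done
# 1
# 3
# 5
# 7
# 9
```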
Today, I spent a couple hours playing with the Alfa Duo, and I've been able to successfully embroider custom vector drawings.
Here are the steps of my current workflow, still in beta.
- .ai for Illustrator
- PES for embroidery
- PES to my iPhone via AirDrop
I've been tinkering with Processing 4 and PEmbroider to embroider a few tests with the Alfa Duo.
I'll share some of my experiments, but I want to make sure I have things worth showing and I have a few tests working.
Back in December 2020, I used the JEF format to save embroidery files and they were working in the Alfa Duo machine. (Here's a video of those experiments.) Actually, the test files I saved back then still work with this machine. But the latest version of PEmbroider generates JEF files that make the machine crash. Luckily, I've been able to make my files work with the PES file format.
If you'd be interested in seeing any of this in the live stream and YouTube videos, please let me know on Twitter @nonoesp or message me on the Discord community!