docker run -it -p HOST_PORT:CONTAINER_PORT your-image
When you run services inside Docker on specific ports, those ports are internal to the container's virtual environment. If you want to connect to those services from your machine, you need to expose ports to the outside world explicitly. In short, you need to map TCP ports in the container to ports on the Docker host, which may be your computer. Here's how to do it.
Let's imagine we have a Next.js app running inside our Docker container.
› docker run -it my-app-image
next dev
# ready - started server on 0.0.0.0:3000, url: http://localhost:3000
The site is exposed on port 3000 of the container, but we can't access it from our machine at http://localhost:3000.
Let's map the port.
› docker run -it -p 1234:3000 my-app-image
next dev
# ready - started server on 0.0.0.0:3000, url: http://localhost:3000
Now we can visit http://localhost:1234 from our machine. When we connect to port 1234, Docker forwards the communication to port 3000 of the container.
You can upload Shorts to YouTube with the YouTube API as you would upload any other video. Simply ensure your video has an aspect ratio of 9:16 and is less than 60 seconds. YouTube will automatically set it as a Short.
Follow this guide to see how to upload videos to YouTube with the YouTube API.
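Before uploading, you could sanity-check those two constraints locally. Here's a minimal sketch; `qualifies_as_short` is a hypothetical helper for illustration, not part of the YouTube API.

```python
# Hypothetical helper to check whether a video qualifies as a Short:
# a 9:16 aspect ratio and a duration under 60 seconds.
def qualifies_as_short(width: int, height: int, duration_seconds: float) -> bool:
    """Returns True if the video meets YouTube's Short criteria."""
    is_vertical = width * 16 == height * 9  # 9:16 aspect ratio
    is_short_enough = duration_seconds < 60
    return is_vertical and is_short_enough

print(qualifies_as_short(1080, 1920, 45))  # True: 9:16 and under a minute
print(qualifies_as_short(1920, 1080, 45))  # False: landscape video
print(qualifies_as_short(1080, 1920, 75))  # False: longer than 60 seconds
```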
Here's how to define simple async functions in TypeScript.
(async (/*arguments*/) => {/*function logic*/})(/*values*/);
// Define an asynchronous function.
const helloAsync = async() => { console.log("Hey, Async!"); }
// Call it asynchronously.
helloAsync();
(async(text: string) => { console.log(text); })("Hello, Async!")
(async(text: string) => { setTimeout(() => console.log(text), 2000); })("Hello, Async!")
// Say we have an async talk() function that logs text to the console.
const talk = async(text: string) => { console.log(text); }
// And a sleep() function that uses a Promise to wait for milliseconds.
const sleep = (ms: number) => {
return new Promise(resolve => setTimeout(resolve, ms));
}
// We can wrap calls to async functions in an async function.
// Then `await` each call to execute them sequentially.
(async () => {
await talk(`Hello!`);
await sleep(1000);
await talk(`What's up?`);
await sleep(2000);
await talk(`Bye now!`);
})();
Here's how to list the commits that happened between two tags.
git log --pretty=oneline 0.8.0...0.9.0
The two tags—in this case, 0.8.0 and 0.9.0—need to exist.
You can list existing tags in a repository as below.
git tag
You can list what packages are installed globally in your system with npm -g list—shorthand for npm --global list—whereas you'd list the packages installed in an NPM project with npm list.
Let's see an example of what the command might return.
npm -g list
# /opt/homebrew/lib
# ├── cross-env@7.0.3
# ├── http-server@14.1.1
# ├── node-gyp@9.3.1
# ├── npm@9.5.0
# ├── pm2@5.2.2
# ├── spoof@2.0.4
# ├── ts-node@10.9.1
# └── typescript@4.9.5
Here are some of the commands we used during the Creative Machine Learning Live 97.
First, create an Anaconda environment or install imaginAIry into your existing Python installation with pip.
pip install imaginairy
Before running the commands below, I entered an interactive imaginAIry shell.
aimg
🤖🧠> # Commands here
# Upscale an image 4x with Real-ESRGAN.
upscale image.jpg
# Generate an image and animate the diffusion process.
imagine "a sunflower" --gif
# Generate an image and create a GIF comparing it with the original.
imagine "a sunflower" --compare-gif
# Schedule argument values.
edit input.jpg \
--prompt "a sunflower" \
--steps 21 \
--arg-schedule "prompt_strength[6:8:0.5]" \
--compilation-anim gif
Here's how to add NuGet packages from a local source to your Visual Studio project.
First, create a folder to hold your local packages (I named mine local-nugets). Then go to Tools > Options > NuGet Package Manager > Package Sources and click the Add button (the green cross) to create a new Package Source. Click the browse button (...) to browse and select the folder you previously created -- local-nugets in my case -- and then click on Update.
Place your NuGet package files in the local-nugets folder, and everything left is to install the package as follows. Go to Project > Manage NuGet Packages > Browse, select your package, and click Install.
Here's how to randomize a list of strings in bash.
On macOS, you can use Terminal or iTerm2.
The shuf
command shuffles a list that is "piped" to it.
An easy way to do that is to list a directory's contents with ls
and then shuffle them.
ls ~/Desktop | shuf
The easiest way to shuffle a set of strings is to define an array in bash and shuffle it with shuf
.
WORDS=('Milk' 'Bread' 'Eggs'); shuf -e ${WORDS[@]}
You can use pbcopy
to copy the shuffled list to your clipboard.
WORDS=('Milk' 'Bread' 'Eggs' ); shuf -e ${WORDS[@]} | pbcopy
Another way to randomize a list of strings from bash is to create a text file, in this case named words.txt
, with a string value per line.
Bread
Milk
Chicken
Turkey
Eggs
You can create this file manually or from the command line with the following command. (printf interprets the \n escapes reliably in bash, whereas echo may not without the -e flag.)
printf 'Bread\nMilk\nChicken\nTurkey\nEggs\n' > words.txt
Then, we cat the contents of words.txt and shuffle the order of the lines with shuf.
cat words.txt | shuf
# Eggs
# Milk
# Chicken
# Turkey
# Bread
Again, you can save the result to the clipboard with pbcopy
.
cat words.txt | shuf | pbcopy
If you found this useful, let me know!
Here's a Python class that can track and push metrics to AWS CloudWatch.
Metrics are reset to their initial values on creation and when metrics are uploaded to CloudWatch.
# metrics.py
'''
A metrics class ready to track and push metrics to AWS CloudWatch.
'''
from datetime import datetime
import os
import boto3
# CloudWatch metrics namespace.
METRICS_NAMESPACE = 'my_metrics_namespace'
# Duration to wait between metric uploads.
METRICS_UPLOAD_THRESHOLD_SECONDS = 50
class Metrics:
    '''
    Holds metrics, serializes them to CloudWatch format,
    and ingests foreign metric values.
    '''

    def __init__(self):
        self.reset()

    def reset(self):
        '''
        Resets metric values and last upload time.
        '''
        self.last_upload_time = datetime.now()
        # Your custom metrics and initial values.
        # Note that here we're using 'my_prefix' as
        # a custom prefix in case you want this class
        # to add a prefix namespace to all its metrics.
        self.my_prefix_first_metric = 0
        self.my_prefix_second_metric = 0

    def to_data(self):
        '''
        Serializes metrics and their values.
        '''
        def to_cloudwatch_format(name, value):
            return {'MetricName': name, 'Value': value}
        result = []
        for name, value in vars(self).items():
            if name != 'last_upload_time':
                result.append(to_cloudwatch_format(name, value))
        return result

    def ingest(self, metrics, prefix=''):
        '''
        Adds foreign metric values to this metrics object.
        '''
        input_metric_names = [attr for attr in dir(metrics)
                              if not callable(getattr(metrics, attr))
                              and not attr.startswith("__")]
        # Iterate through foreign keys and add metric values.
        for metric_name in input_metric_names:
            # Get value of foreign metric.
            input_metric_value = getattr(metrics, metric_name)
            # Get metric key, prepending the prefix if provided.
            metric_key = f'{prefix}_{metric_name}' if prefix else metric_name
            # Get metric value.
            metric_value = getattr(self, metric_key)
            # Add foreign values to this metrics object.
            setattr(
                self,
                metric_key,
                input_metric_value + metric_value
            )

    def upload(self, force=False):
        '''
        Uploads metrics to CloudWatch when time since last
        upload is above a duration or when forced.
        '''
        # Get time elapsed since last upload.
        seconds_since_last_upload = \
            (datetime.now() - self.last_upload_time).seconds
        # Only upload if duration is greater than threshold,
        # or when the force flag is set to True.
        if seconds_since_last_upload > METRICS_UPLOAD_THRESHOLD_SECONDS \
                or force:
            # Upload metrics to CloudWatch.
            cloudwatch = boto3.client(
                'cloudwatch',
                os.getenv('AWS_REGION')
            )
            cloudwatch.put_metric_data(
                Namespace=METRICS_NAMESPACE,
                MetricData=self.to_data()
            )
            # Reset metrics.
            self.reset()
To use this class, we just have to instantiate a metrics object, track some metrics, and upload them.
# Create a metrics object.
metrics = Metrics()
# Add values to its metrics.
metrics.my_prefix_first_metric += 3
metrics.my_prefix_second_metric += 1
# Upload metrics to CloudWatch.
metrics.upload(force=True)
If you're processing metrics at a fast pace, you don't want to upload them every single time you increase their value, as otherwise CloudWatch will complain. In certain cases, AWS CloudWatch's limit is 5 transactions per second (TPS) per account or AWS Region. When this limit is reached, you'll receive a RateExceeded throttling error.
By calling metrics.upload(force=False)
we only upload once every METRICS_UPLOAD_THRESHOLD_SECONDS
. (In this example, at maximum every 50 seconds.)
import time

# Create a metrics object.
metrics = Metrics()

for i in range(0, 100, 1):
    # Wait for illustration purposes,
    # as if we were doing work.
    time.sleep(1)
    # Add values to its metrics.
    metrics.my_prefix_first_metric += 3
    metrics.my_prefix_second_metric += 1
    # Only upload if more than the threshold
    # duration has passed since we last uploaded.
    metrics.upload()

# Force-upload metrics to CloudWatch once we're done.
metrics.upload(force=True)
Lastly, here's how to ingest foreign metrics with or without a prefix.
# We define a foreign metrics class.
class OtherMetrics:

    def __init__(self):
        self.reset()

    def reset(self):
        # Note that here we don't have 'my_prefix'.
        self.first_metric = 0
        self.second_metric = 0
# We instantiate both metric objects.
metrics = Metrics()
other_metrics = OtherMetrics()
# The foreign metrics track values.
other_metrics.first_metric += 15
other_metrics.second_metric += 3
# Then our main metrics class ingests those metrics.
metrics.ingest(other_metrics, prefix='my_prefix')
# Then our main metrics class has those values.
print(metrics.my_prefix_first_metric)
# Prints 15
print(metrics.my_prefix_second_metric)
# Prints 3
If you found this useful, let me know!
Take a look at other posts about code, Python, and Today I Learned(s).
Here's how to sort a list of Python dictionaries by a key (a property name) of its items. Check this post if you're looking to sort a list of lists instead.
# A list of people
people = [
    {'name': 'Nono', 'age': 32, 'location': 'Spain'},
    {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
    {'name': 'Phillipe', 'age': 100, 'location': 'France'},
    {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
]
# Sort people by age, ascending
people_sorted_by_age_asc = sorted(people, key=lambda x: x['age'])
print(people_sorted_by_age_asc)
# [
# {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
# {'name': 'Nono', 'age': 32, 'location': 'Spain'},
# {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
# {'name': 'Phillipe', 'age': 100, 'location': 'France'}
# ]
# Sort people by age, descending
people_sorted_by_age_desc = sorted(people, key=lambda x: -x['age'])
print(people_sorted_by_age_desc)
# [
# {'name': 'Phillipe', 'age': 100, 'location': 'France'},
# {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
# {'name': 'Nono', 'age': 32, 'location': 'Spain'},
# {'name': 'Alice', 'age': 20, 'location': 'Wonderland'}
# ]
# Sort people by name, ascending
people_sorted_by_name_asc = sorted(people, key=lambda x: x['name'])
print(people_sorted_by_name_asc)
# [
# {'name': 'Alice', 'age': 20, 'location': 'Wonderland'},
# {'name': 'Jack', 'age': 45, 'location': 'Caribbean'},
# {'name': 'Nono', 'age': 32, 'location': 'Spain'},
# {'name': 'Phillipe', 'age': 100, 'location': 'France'}
# ]
You can measure the time elapsed during the execution of Python commands by keeping a reference to the start time and then subtracting it from the current time at any point in your program to obtain the duration between the two points in time.
from datetime import datetime
import time
# Define the start time.
start = datetime.now()
# Run some code..
time.sleep(2)
# Get the time delta since the start.
elapsed = datetime.now() - start
# datetime.timedelta(seconds=2, microseconds=5088)
# 0:00:02.005088
# Get the seconds since the start.
elapsed_seconds = elapsed.seconds
# 2
Let's create two helper functions to get the current time (i.e. now
) and the elapsed
time at any moment.
# Returns current time
# (and, if provided, prints the event's name)
def now(eventName=''):
    if eventName:
        print(f'Started {eventName}..')
    return datetime.now()

# Store current time as `start`
start = now()

# Returns time elapsed since `beginning`
# (and, optionally, prints the duration in seconds)
def elapsed(beginning=start, log=False):
    duration = datetime.now() - beginning
    if log:
        print(f'{duration.seconds}s')
    return duration
With those utility functions defined, we can measure the duration of different events.
# Define time to wait
wait_seconds = 2
# Measure duration (while waiting for 2 seconds)
beginning = now(f'{wait_seconds}-second wait.')
# Wait.
time.sleep(wait_seconds)
# Get time delta.
elapsed_time = elapsed(beginning, True)
# Prints '2s'; returns 0:00:02.004004
# Get seconds.
elapsed_seconds = elapsed_time.seconds
# 2
# Get microseconds.
elapsed_microseconds = elapsed_time.microseconds
# 4004
If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, Python, React, and TypeScript.
Here's how to sort a Python list by a key of its items. Check this post if you're looking to sort a list of dictionaries instead.
# A list of people
# name, age, location
people = [
    ['Nono', 32, 'Spain'],
    ['Alice', 20, 'Wonderland'],
    ['Phillipe', 100, 'France'],
    ['Jack', 45, 'Caribbean'],
]
# Sort people by age, ascending
people_sorted_by_age_asc = sorted(people, key=lambda x: x[1])
# [
# ['Alice', 20, 'Wonderland'],
# ['Nono', 32, 'Spain'],
# ['Jack', 45, 'Caribbean'],
# ['Phillipe', 100, 'France']
# ]
# Sort people by age, descending
people_sorted_by_age_desc = sorted(people, key=lambda x: -x[1])
# [
# ['Phillipe', 100, 'France'],
# ['Jack', 45, 'Caribbean'],
# ['Nono', 32, 'Spain'],
# ['Alice', 20, 'Wonderland']
# ]
# Sort people by name, ascending
people_sorted_by_name_asc = sorted(people, key=lambda x: x[0])
# [
# ['Alice', 20, 'Wonderland'],
# ['Jack', 45, 'Caribbean'],
# ['Nono', 32, 'Spain'],
# ['Phillipe', 100, 'France']
# ]
Here's how to read contents from a comma-separated value (CSV) file in Python; maybe a CSV that already exists or a CSV you saved from Python.
import csv

csv_file_path = 'file.csv'

with open(csv_file_path, encoding='utf-8') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    # Read all rows once; the reader is exhausted after iterating.
    rows = list(csv_reader)
    # Print the first five rows
    for row in rows[:5]:
        print(row)
    # Print all rows
    for row in rows:
        print(row)
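The post mentions CSVs you may have saved from Python. Here's a minimal sketch of writing one with csv.writer so there's a file to read back; the file name people.csv and its contents are made up for illustration.

```python
import csv

# Rows to save, including a header row.
rows = [
    ['name', 'age'],
    ['Nono', '32'],
    ['Alice', '20'],
]

# Write the rows to a CSV file.
with open('people.csv', 'w', encoding='utf-8', newline='') as csv_file:
    writer = csv.writer(csv_file, delimiter=',')
    writer.writerows(rows)

# Read the file back to verify its contents.
with open('people.csv', encoding='utf-8') as csv_file:
    print(list(csv.reader(csv_file)))
# [['name', 'age'], ['Nono', '32'], ['Alice', '20']]
```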
Here's how to generate pseudo-random numbers in Python.
import random
# Random generation seed for reproducible results
seed = 42
# Float
random.Random(seed).uniform(3,10)
# 7.475987589205186
# Integer
int(random.Random(seed).uniform(3,10))
# 7
# Integer
random.Random(seed).randint(0, 999)
# 654
See the random module for more information.
Here's how to pass arguments to a Dockerfile when building a custom image with Docker.
First, you need to define a Dockerfile that uses an argument.
# Dockerfile
FROM python
ARG code_dir # Our argument
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./$code_dir /code/
RUN pip install -r requirements.txt
What the above Dockerfile does is parametrize the location of the directory of script.py, our Docker image's entry point.
For this example's sake, let's assume our directory structure looks like the following.
project/
  Dockerfile
  code_a/script.py
  code_b/script.py
# code_a/script.py
print('This is code_a!')
# code_b/script.py
print('This is code_b!')
Then you'll pass the code_dir variable as an argument to docker build to decide whether the Dockerfile is going to COPY folder code_a or code_b into our image.
Let's pass code_a as our code_dir first.
docker build -t my_image_a --build-arg code_dir=code_a .
docker run -it my_image_a
# Prints 'This is code_a!'
Then code_b.
docker build -t my_image_b --build-arg code_dir=code_b .
docker run -it my_image_b
# Prints 'This is code_b!'
The objective of this example was to avoid having two different Dockerfiles that look exactly the same but simply specify different source code paths.
We could have done the same with the following two Dockerfiles, specifying which Dockerfile to use in each case with the -f flag.
# Dockerfile.code_a
FROM python
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./code_a /code/
RUN pip install -r requirements.txt
# Dockerfile.code_b
FROM python
WORKDIR /code/
ENTRYPOINT ["python", "/code/script.py"]
COPY ./code_b /code/
RUN pip install -r requirements.txt
docker build -t my_image_a -f Dockerfile.code_a .
docker run -it my_image_a
# Prints 'This is code_a!'
docker build -t my_image_b -f Dockerfile.code_b .
docker run -it my_image_b
# Prints 'This is code_b!'
If you found this useful, let me know!
The Google Research team has published a paper for MusicLM, a machine learning model that generates high-fidelity music from text prompts, and it works extremely well. But they won't release it to the public, at least not yet.
You can browse and play through the examples to listen to results obtained by the research team for a wide variety of text-to-music tasks, including audio generation from rich captions, long generation, story mode, text and melody conditioning, painting caption conditioning, 10s audio generation from text, and generation diversity.
I'm particularly surprised by the text and melody conditioning examples, where a text prompt—say, "piano solo," "string quartet," or "tribal drums"—can be combined with a melody prompt—say, "bella ciao - humming"—generating accurate results.
Even though they haven't released the model, Google Research has publicly released MusicCaps to support future research, "a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts."
Leading zeros are extra zeros to the left of a number used to keep a regular number of digits in a set of numbers. For instance, 0001, 0002, and 0003 is a good format if you think you'll get to have thousands of entries, as you can stay at four digits up to 9999.
# Define your number
number = 1
two_digits = f'{number:02d}'
# 01
four_digits = f'{number:04d}'
# 0001
We use the Python formatting helper {my_number:04d} to enforce a minimum number of digits in our number variable.
This means you can use it to set the value of a string or to create or print a longer string with that number, not necessarily having to store its value.
a_number = 42
print(f'The number is {a_number:06d}.')
# The number is 000042.
print(f'The number is {512:06d}.')
# The number is 000512.
The e flag/option of pip "installs a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url."
pip install -e .
-e, --editable <path/url>
As described in the expanded command flag, -e stands for editable.
Today I learned you can use the plus (+) operator to concatenate or extend lists in Python.
Say you have two lists.
list_a = [1, 2, 3]
list_b = ['Nono', 'MA']
And that you want to create a continuous list with the contents of both, which would look something like [1, 2, 3, 'Nono', 'MA'].
You can simply add both lists to obtain that result.
>>> combined_list = [1, 2, 3] + ['Nono', 'MA']
>>> combined_list
[1, 2, 3, 'Nono', 'MA']
Of course, it doesn't make much sense in this example because we're explicitly defining the lists and we could define a combined list directly.
combined_list = [1, 2, 3, 'Nono', 'MA']
But it can be useful when we actually need to add lists, for instance to concatenate the results of glob file listing operations.
>>> from glob import glob
>>> files_a = glob('a/*')
>>> files_a
['a/file.txt', 'a/image.jpg']
>>> files_b = glob('b/*')
>>> files_b
['b/data.json', 'b/profile.jpeg']
>>> all_files = files_a + files_b
>>> all_files
['a/file.txt', 'a/image.jpg', 'b/data.json', 'b/profile.jpeg']
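One detail worth knowing: the + operator builds a new list and leaves its operands untouched, which matters if other code holds references to the original lists. A quick sketch with the same values hard-coded:

```python
files_a = ['a/file.txt', 'a/image.jpg']
files_b = ['b/data.json', 'b/profile.jpeg']

# `+` returns a new list; the inputs are unchanged.
all_files = files_a + files_b
print(all_files)
# ['a/file.txt', 'a/image.jpg', 'b/data.json', 'b/profile.jpeg']
print(files_a)
# ['a/file.txt', 'a/image.jpg']
```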
Iterating through a list.
for i in 1 2 3
do
echo $i
done
# 1
# 2
# 3
Iterating through a list generated with a sequence.
for i in $(seq 1 2 10)
do
echo $i
done
# 1
# 3
# 5
# 7
# 9
seq 1 2 10 creates a list of numbers from 1 to 10 in steps of 2.
According to OpenAI, "embeddings are numerical representations of concepts converted to number sequences, which make it easy for computers to understand the relationships between those concepts."
They introduced a new text and code embeddings API endpoint on January 25, 2022,1 capable of measuring the relatedness of text strings.
Here's a list of common uses of text embeddings, as listed in OpenAI's documentation.
I look forward to testing this API on my writing to see how well it recommends, classifies, and clusters my mini-essays.
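The post doesn't prescribe a similarity metric, but cosine similarity is a common way to measure the relatedness of two embedding vectors. Here's a minimal NumPy sketch with made-up three-dimensional vectors; real embeddings have hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up embedding vectors for illustration.
dog = [0.9, 0.1, 0.0]
puppy = [0.8, 0.2, 0.1]
car = [0.0, 0.1, 0.9]

print(cosine_similarity(dog, puppy))  # Close to 1: related concepts.
print(cosine_similarity(dog, car))    # Close to 0: unrelated concepts.
```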
Text and Code Embeddings by Contrastive Pre-Training. OpenAI. Jan 25, 2022. ↩
How to encode an image dataset to reduce its dimensionality and visualize it in the 2D space.
Here's a way to encode a Laravel site request as JSON to log it via Laravel's logging mechanism, using the Log class from the illuminate/support package.1
// Log parameters in a get request
Route::get('a-view', function(Request $request) {
    \Log::info(json_encode(request()->server()));
    return view('your.view');
});

// Log parameters in a get request and redirect
Route::get('redirect', function(Request $request) {
    \Log::info(json_encode(request()->server()));
    return redirect('/some/page');
});
The service provider of Laravel's Log class is Illuminate\Support\Facades\Log. ↩
Here's how to translate 3d points in Python using a translation matrix.
To translate a series of points in three-dimensional Cartesian space (x, y, z), you first need to "homogenize" the points by adding a value to their projective dimension (which we'll set to one to maintain the points' original coordinates), and then multiply the point cloud by a transformation matrix using NumPy's np.matmul method. The matrix is a (4, 4) identity matrix with three translation parameters (tx, ty, tz) in its bottom row.
Here's a breakdown of the steps.
# translate.py
import numpy as np
# Define a set of Cartesian (x, y, z) points
point_cloud = [
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
    [1, 2, 3],
]

# Convert to homogeneous coordinates
point_cloud_homogeneous = []
for point in point_cloud:
    point_homogeneous = point.copy()
    point_homogeneous.append(1)
    point_cloud_homogeneous.append(point_homogeneous)

# Define the translation
tx = 2
ty = 10
tz = 100

# Construct the translation matrix
translation_matrix = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [tx, ty, tz, 1],
]

# Apply the transformation to our point cloud
translated_points = np.matmul(
    point_cloud_homogeneous,
    translation_matrix)

# Convert to Cartesian coordinates
translated_points_xyz = []
for point in translated_points:
    point = np.array(point[:-1])
    translated_points_xyz.append(point)

# Map original to translated point coordinates
# (x0, y0, z0) → (x1, y1, z1)
for i in range(len(point_cloud)):
    point = point_cloud[i]
    translated_point = translated_points_xyz[i]
    print(f'{point} → {list(translated_point)}')
If you try to serialize a NumPy array to JSON in Python, you'll get the error below.
TypeError: Object of type ndarray is not JSON serializable
Luckily, NumPy has a built-in method to convert one- or multi-dimensional arrays to lists, which are in turn JSON serializable.
import numpy as np
import json
# Define your NumPy array
arr = np.array([[100,200],[300,400]])
# Convert the array to list
arr_as_list = arr.tolist()
# Serialize as JSON
json.dumps(arr_as_list)
# '[[100, 200], [300, 400]]'
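To go the other way, you can parse the JSON string and rebuild the array with np.array. A quick sketch:

```python
import json
import numpy as np

serialized = '[[100, 200], [300, 400]]'

# Deserialize the JSON string back into a NumPy array.
restored = np.array(json.loads(serialized))
print(restored.shape)
# (2, 2)
```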
You can get tomorrow's date in TypeScript with the Date class.
// Create a date
const tomorrow = new Date()
// Set date to current date plus 1 day
tomorrow.setDate(tomorrow.getDate() + 1)
// 2022-11-03T09:55:29.395Z
You could change that + 1 to the time delta you want to go backward or into the future.
// Create a date for Jan 2, 2020
const aDate = new Date(Date.parse("2020-01-02"))
// Go back in time three days
aDate.setDate(aDate.getDate() - 3)
new Date(aDate)
// 2019-12-30T00:00:00.000Z
// Go back in time three days
aDate.setDate(aDate.getDate() - 3)
new Date(aDate)
// 2019-12-27T00:00:00.000Z
// Go forward in time forty days
aDate.setDate(aDate.getDate() + 40)
new Date(aDate)
// 2020-02-05T00:00:00.000Z