Nono.MA

SEPTEMBER 23, 2020

Linters analyze code, using the abstract syntax tree (AST), to catch errors and suggest best practices. (Function complexity, syntax improvements, etc.)

Formatters fix style. (Spacing, line breaks, comments, etc.)
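
For instance, in this hypothetical Python snippet, the comments mark what a linter would flag versus what a formatter would fix:

import os  # a linter flags this unused import

def add(a,b):  # a formatter inserts the missing space after the comma
    result=a+b  # a formatter adds spaces around the operators
    return result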

SEPTEMBER 11, 2020

"Less than 50 days after the release YOLOv4, YOLOv5 improves accessibility for realtime object detection." Read the Roboflow post.

LAST UPDATED SEPTEMBER 16, 2020

Here are resources that are helping me get started with machine learning, and a few I would have loved to know about earlier. I'll probably be updating this page with new resources from time to time.

Stanford Cheat Sheets

A summary of terms, algorithms, and equations. (I barely understand the equations.) These sheets, developed by Afshine and Shervine Amidi, differentiate between artificial intelligence (AI), machine learning (ML), and deep learning (DL), but many concepts overlap. See this Venn diagram.

Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems

I highly recommend this book, which I'm going through at the moment, written by an ex-Googler who worked on YouTube's video-classification algorithm. It's dense, but it introduces you to all the relevant artificial intelligence, machine learning, and deep learning concepts, and it guides you through preparing custom datasets to train algorithms (a bit of data science, I guess). At the same time, it introduces you to three of the most-used machine learning frameworks: Scikit-Learn, Keras, and TensorFlow, the last of which is the one I use in my day-to-day job developing and releasing machine learning models for production. Similar frameworks are Caffe and PyTorch, the latter used by Facebook developers. (Thanks to Keith Alfaro for the recommendation.)

Open-source code and tutorials

I got started with machine learning by trying open-source algorithms. It's common to visit the GitHub repository corresponding to a paper and give it a try. Two examples are Pix2Pix (2016) and EfficientDet (2020). You use their code as is, then train with a custom dataset and see how the model performs for your needs.

TensorFlow re-implements many of these models and publishes easy-to-follow tutorials.

  • Pix2Pix in TensorFlow Core - Made by the Google TensorFlow team, this tutorial lets you view the code on GitHub, download the Jupyter Notebook (written in Python), or run the notebook in Google Colab, where you can press a button and run each piece of Python code in the cloud to understand the different parts of setting up and training an algorithm: reading the dataset, preparing the training and validation sets, creating the model, training it, and more.
  • TensorFlow tutorials - This is a good place to get your hands dirty. While machine learning has a strong theoretical component, you can leave that aside and start by training and testing models for image classification, object detection, semantic image segmentation, and many other tasks. (A minimal training sketch follows this list.)
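
Here's a minimal training sketch (mine, not from those tutorials), assuming TensorFlow 2 is installed and using the built-in MNIST dataset:

import tensorflow as tf

# Load a small built-in dataset and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define a tiny image classifier
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Compile, train, and evaluate
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)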

Friendly user interfaces

  • Runway - A friend of mine, Cristóbal Valenzuela, is building his own machine learning platform for creatives. It's the place for people who don't know how to code (or don't want to) to be able to use complex machine learning models, training them with custom data and deploying them to the cloud. Here's an interview where he told me about the beginnings of Runway.
  • Machine Learning for Designers Talk - A talk I gave about these types of interfaces, a few projects, and the role they play for designers and people who don't know how to code.

Courses

Other resources

  • TensorFlow: Tensor and Image Basics - A video with basic tensor and image operations in TensorFlow: how to use tensors to encode images and matrices and visualize them. (See the sketch after this list.)
  • TensorFlow: Visualizing Convolutions - A video on visualizing the filters of an image convolution, an operation used in convolutional neural networks and known for its ability to extract image features in an unsupervised way for classification tasks.
  • Awesome Machine Learning - A big and frequently-updated list of machine learning resources.
  • Suggestive Drawing - My Harvard master's thesis, in which I explore how collaboration between human and artificial intelligences can enhance the design process.
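
As a tiny illustration of those tensor-and-image basics (my own sketch, not from the videos), assuming TensorFlow 2 and matplotlib are installed:

import tensorflow as tf
import matplotlib.pyplot as plt

# Encode a random 28x28 matrix as a tensor and visualize it as a grayscale image
matrix = tf.random.uniform((28, 28))
plt.imshow(matrix.numpy(), cmap="gray")
plt.show()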

AUGUST 24, 2020

Apache Groovy (Groovy Lang) "is a powerful, optionally typed and dynamic language, with static-typing and static compilation capabilities, for the Java platform aimed at improving developer productivity thanks to a concise, familiar and easy to learn syntax. It integrates smoothly with any Java program, and immediately delivers to your application powerful features, including scripting capabilities, Domain-Specific Language authoring, runtime and compile-time meta-programming and functional programming."

AUGUST 13, 2020

While macOS ships with Python 2 by default, you can install Python 3 and set it as the default Python version on your Mac.

First, you install Python 3 with Homebrew.

brew update && brew install python

To make this new version your default, add the following line to your ~/.zshrc file.

alias python=/usr/local/bin/python3

Then open a new Terminal and Python 3 should be running.

Let's verify this is true.

python --version # e.g. Python 3.8.5

How do I find the python3 path?

Homebrew provides info about any installed "bottle" via the info command.

brew info python
# python@3.8: stable 3.8.5 (bottled)
# Interpreted, interactive, object-oriented programming language
# https://www.python.org/
# /usr/local/Cellar/python@3.8/3.8.5 (4,372 files, 67.7MB) *
# ...

And you can find the path we're looking for with grep.

brew info python | grep bin
# /usr/local/bin/python3
# /usr/local/opt/python@3.8/libexec/bin

How do I use Python 2 if I need it?

Your system's Python 2.7 is still there.

/usr/bin/python --version # e.g. Python 2.7.16

You can also use Homebrew's Python 2.

brew install python@2

Before you go

If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, Python, and macOS.

JUNE 30, 2020

You can measure the time elapsed during the execution of TypeScript code by keeping a reference to the start time and then, at any point in your program, subtracting that start time from the current time to obtain the elapsed time.

const start = new Date().getTime();

// Run some code..

let elapsed = new Date().getTime() - start;

Let's create two helper functions to get the current time (i.e. now) and the elapsed time at any point from that moment.

// Returns current time
// (and, if provided, prints the event's name)
const now = (eventName = null) => {
    if (eventName) {
        console.log(`Started ${eventName}..`);
    }
    return new Date().getTime();
}

// Store current time as `start`
let start = now();

// Returns time elapsed since `beginning`
// (and, optionally, prints the duration in seconds)
const elapsed = (beginning = start, log = false) => {
    const duration = new Date().getTime() - beginning;
    if (log) {
        console.log(`${duration/1000}s`);
    }
    return duration;
}

With those utility functions defined, we can measure the duration of different events.

// A promise that takes X ms to resolve
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

// Measure duration (while waiting for 2 seconds)
(async function demo() {
    const waitInSeconds = 2;
    let beginning = now(`${waitInSeconds}-second wait`);
    // Prints Started 2-second wait..
    await sleep(waitInSeconds * 1000);
    elapsed(beginning, true);
    // Prints 2.004s
})();

Before you go

If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, React, and TypeScript.

JUNE 8, 2020

Just came across this machine learning (and TensorFlow) glossary which "defines general machine learning terms, plus terms specific to TensorFlow."

JUNE 2, 2020

In trying to use Artisan::call($command, $arguments) to execute a command exposed by my Laravel package—Folio—I was running into this issue.

The command "folio:clone" does not exist.

My commands were working on the terminal, by calling php artisan folio:clone, for instance, but they were not working programmatically when calling something like this.

Artisan::call('folio:clone 123 "New Title"');

Artisan::command was not a solution, as it serves to register commands, not to execute them.

By looking into the FolioServiceProvider.php (the service provider of my own package) I noticed the $this->app->runningInConsole() check. My commands were being registered in the console but were not exposed elsewhere (that is, in the application itself).

I'd guess this is a security and performance measure. Commands that don't need to be available to the Laravel app are not registered for it.

Solution

The solution was simply registering the commands I want to be callable from my Laravel sites outside of the if statement that checks for $this->app->runningInConsole().

While eight commands are only available to run on the console, there's one available to both the console and the application's runtime.

if ($this->app->runningInConsole()) {
    $this->commands([
        \Nonoesp\Folio\Commands\GenerateSitemap::class,
        \Nonoesp\Folio\Commands\MigrateTemplate::class,
        \Nonoesp\Folio\Commands\TextAndTitleToJSON::class,
        \Nonoesp\Folio\Commands\ItemPropertiesExport::class,
        \Nonoesp\Folio\Commands\ItemPropertiesImport::class,
        \Nonoesp\Folio\Commands\ItemRetag::class,
        \Nonoesp\Folio\Commands\InstallCommand::class,
        \Nonoesp\Folio\Commands\CreateUserCommand::class,
    ]);      
}

$this->commands([
    \Nonoesp\Folio\Commands\ItemClone::class,
]);

In my case, I'm the maintainer of the package and could easily work around this limitation by taking the command I want to use in Laravel out of the if statement.

But you can register commands yourself in your app's $commands array in app/Console/Kernel.php. See the following example.

// app/Console/Kernel.php
protected $commands = [
    \Nonoesp\Folio\Commands\CreateUserCommand::class,
];

While the CreateUserCommand is only registered for the console by the package, I can explicitly make it available to my entire application and call it with Artisan::call('folio:user {email} {password}') (which is this command's signature).

Thanks!

I hope you found this useful. Feel free to ping me at @nonoesp, join the mailing list, or check out other Laravel posts and code-related publications.

MAY 14, 2020

I recently got Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition by Aurélien Géron as a recommendation from Keith.

This second edition updates all code samples to work with TensorFlow 2, and the repository that accompanies the book (ageron/handson-ml2) is also updated frequently to keep up with the latest changes.

The Python notebooks in that GitHub repository alone are super helpful to get an overview of state-of-the-art machine learning and deep learning techniques, from the basics of machine learning and classic techniques (classification, support vector machines, or decision trees) to the latest ways to code neural networks: customizing and training them, loading and pre-processing data, natural language processing, computer vision, autoencoders and GANs, or reinforcement learning.
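
As a taste of the classic techniques the book covers, here's a minimal classification sketch (mine, not from the book), assuming scikit-learn is installed:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small built-in dataset and split it into training and test sets
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit a decision tree and print its accuracy on the held-out test set
clf = DecisionTreeClassifier(max_depth=3)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))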

MAY 13, 2020

#Graph2Plan

Nice work from Shenzhen, Carleton, and Simon Fraser Universities, titled Graph2Plan: Learning Floorplan Generation from Layout Graphs, along the lines of #HouseGAN. Via @alfarok.

Our deep neural network Graph2Plan is a learning framework for automated floorplan generation from layout graphs. The trained network can generate floorplans based on an input building boundary only (a-b), like in previous works. In addition, we allow users to add a variety of constraints such as room counts (c), room connectivity (d), and other layout graph edits. Multiple generated floorplans which fulfill the input constraints are shown.

Read the paper on arXiv.

MAY 10, 2020

We propose In-Domain GAN inversion (IDInvert) by first training a novel domain-guided encoder which is able to produce in-domain latent code, and then performing domain-regularized optimization which involves the encoder as a regularizer to land the code inside the latent space when being finetuned. The in-domain codes produced by IDInvert enable high-quality real image editing with fixed GAN models.

MAY 5, 2020

Connect directly to RunwayML models with only a few lines of code to build web apps, chatbots, plugins, and more. Hosted Models live on the web and can be used anytime, anywhere, without requiring RunwayML to be open!

[…]

We've also released a JavaScript SDK alongside the new Hosted Models feature. Use it to bring a Hosted Model to your next project in just 3 lines of code.

APRIL 26, 2020

I managed to make this work by unlinking openssl (brew unlink openssl).

https://github.com/wting/autojump/issues/540

Then reinstalling python.

brew reinstall python@2

I was having this issue when trying to install Google Cloud SDK. After doing the previous steps, I could run the installer without a problem.

./google-cloud-sdk/install.sh

APRIL 19, 2020

David Ha trained SketchRNN with a flowchart dataset. You can test his live demo (mobile friendly) and his multi-prediction demo (not mobile-friendly).

The source code is available on GitHub.

APRIL 18, 2020

Polyscope is a C++ & Python viewer for 3D data like meshes and point clouds. Here's a code sample in Python from their site. (A C++ equivalent is also available at polyscope.run.)

import polyscope as ps

# Initialize polyscope
ps.init()

### Register a point cloud
# `my_points` is a Nx3 numpy array
ps.register_point_cloud("my points", my_points)

### Register a mesh
# `verts` is a Nx3 numpy array of vertex positions
# `faces` is a Fx3 array of indices, or a nested list
ps.register_surface_mesh("my mesh", verts, faces, smooth_shade=True)

# Add a scalar function and a vector function defined on the mesh
# vertex_scalar is a length V numpy array of values
# face_vectors is an Fx3 array of vectors per face
ps.get_surface_mesh("my mesh").add_scalar_quantity("my_scalar", 
        vertex_scalar, defined_on='vertices', cmap='blues')
ps.get_surface_mesh("my mesh").add_vector_quantity("my_vector", 
        face_vectors, defined_on='faces', color=(0.2, 0.5, 0.5))

# View the point cloud and mesh we just registered in the 3D UI
ps.show()

APRIL 11, 2020

nodemon is a tool that helps develop Node.js based applications by automatically restarting the node application when file changes in the directory are detected.

How to install it globally?

npm install -g nodemon

Then you can use it anywhere as nodemon script.js and, every time script.js changes, the execution will restart with the new code, saving you from repeatedly calling node script.js yourself.

MARCH 31, 2020

Pretty impressed by ImgIX's JSON format. Long story short, it provides a JSON metadata file for each resource you're serving via ImgIX (say, an image or a video), as in the example below, just by appending fm=json to any image request.

For this image.

https://nono.imgix.net/img/u/profile-nono-ma.jpg

You'd load the JSON at.

https://nono.imgix.net/img/u/profile-nono-ma.jpg?fm=json

What's nice is that, with the imgix.js JavaScript library, you can fetch the JSON file for each picture and, before loading any image data, decide how to deliver the image. In this CodePen provided by ImgIX, they crop a set of images to fit a certain aspect ratio, which ends up saving data (the cropped bits of each image are never loaded) and also avoids having to play with background-image CSS tricks to fit an image to a given aspect ratio (as the image already has it!).

{
    "Exif": {
        "PixelXDimension": 1500,
        "DateTimeDigitized": "2019:11:20 10:43:28",
        "PixelYDimension": 1500,
        "ColorSpace": 65535
    },
    "Orientation": 1,
    "Output": {},
    "Content-Type": "image\/jpeg",
    "JFIF": {
        "IsProgressive": true
    },
    "DPIWidth": 144,
    "Content-Length": "180595",
    "Depth": 8,
    "ColorModel": "RGB",
    "DPIHeight": 144,
    "TIFF": {
        "ResolutionUnit": 2,
        "DateTime": "2019:11:20 11:17:40",
        "Orientation": 1,
        "Software": "Adobe Photoshop CC 2019 (Macintosh)",
        "YResolution": 144,
        "XResolution": 144
    },
    "PixelWidth": 1500,
    "PixelHeight": 1500,
    "ProfileName": "Display"
}

And here's the JavaScript included in that CodePen that does all the magic.

// Demonstrate the use of the `fm=json` parameter to resize images
// to a certain aspect ratio, using ES6.

let ratio = 16 / 9;
let maxSize = 300;

let placeImages = function () {
    jQuery('.imgix-item').each((i, value) => {
        let $elem = jQuery(value);
        // We pull down the image specific by the 'data-src' attribute
        // of each .imgix-item, but append the "?fm=json" query string to it.
        // This instructs imgix to return the JSON Output Format instead of 
        // a manipulated image.
        let url = new imgix.URL($elem.attr('data-src'), { fm: "json" }).getUrl();

        jQuery.ajax(url).success((data) => {
            let newWidth, newHeight;

            // Next, we compute the new height/width params for 
            // each of our images.
            if (data.PixelHeight > data.PixelWidth) {
                newHeight = maxSize;
                newWidth = Math.ceil(newHeight / ratio);
            } else {
                newWidth = maxSize;
                newHeight = Math.ceil(newWidth / ratio);
            }

            // Now, we apply these to our actual images, setting the 'src'
            // attribute for the first time.
            $elem.get(0).src = new imgix.URL($elem.attr('data-src'), {
                w: newWidth,
                h: newHeight,
                fit: "crop"
            }).getUrl();
        })
    });
}

jQuery(document).ready(placeImages);

MARCH 30, 2020

# Combine multiple PDF files into a single PDF with Ghostscript
gs -dNOPAUSE -sDEVICE=pdfwrite \
-sOUTPUTFILE=/output/path/combined.pdf \
-dBATCH /input/path/to/pdfs/*.pdf

MARCH 24, 2020

2020.06.03

I've found that if creating or starting a notebook takes longer than 5 minutes, the notebook will fail; plus, re-creating the conda environment every time you start an existing notebook makes the wait really long. The solution I now prefer is to use the persistent-conda-ebs scripts (on-create.sh and on-start.sh) provided by Amazon SageMaker as examples. To keep it short: on creation, they download Miniconda and create an environment with whatever Python version you choose; you can customize that environment (say, installing Python packages with pip or conda inside of it); and the environment persists across sessions, so future starts run the on-start script and have your notebook running in 1–2 minutes. Hope that helps! That's the way I'm using lifecycle configurations now.


2020.03.24

Here's something I learned about Amazon SageMaker today at work.

You can create notebook instances with different instance types (say, ml.t2.medium or ml.p3.2xlarge) and use a set of kernels that have been set up for you. These are conda (Anaconda) environments exposed as Jupyter notebook kernels that execute the commands you write in the Python notebook.

What I learned today is that you can create your own conda environments and expose them as kernels, so you're not limited to the kernels offered by AWS.

This is the sample environment I set up today. These commands should be run in a Terminal window on a SageMaker notebook instance, but they will most likely run in any environment with conda installed.

# Create new conda environment named env_tf210_p36
$ conda create --name env_tf210_p36 python=3.6 tensorflow-gpu=2.1.0 ipykernel tensorflow-datasets matplotlib pillow keras

# Enable conda on bash
$ echo ". /home/ec2-user/anaconda3/etc/profile.d/conda.sh" >> ~/.bashrc

# Enter bash (if you're not already running in bash)
$ bash

# Activate your freshly created environment
$ conda activate env_tf210_p36

# Install GitHub dependencies
$ pip install git+https://github.com/tensorflow/examples.git

# Now you have your environment setup - Party!
# ..

# When you're ready to leave
$ conda deactivate

How do we expose our new conda environment as a SageMaker kernel?

# Activate the conda environment (as it has ipykernel installed)
$ conda activate env_tf210_p36

# Expose your conda environment with ipykernel
$ python -m ipykernel install --user --name env_tf210_p36 --display-name "My Env (tf_2.1.0 py_3.6)"

After reloading your notebook instance you should see your custom environment appear in the launcher and in the notebook kernel selector.

What if you don't want to repeat this process over and over and over?

You can create a lifecycle configuration on SageMaker that will run this initial environment creation setup every time you create a new notebook instance. (You create a new Lifecycle Configuration and paste the following code inside of the Create Notebook tab.)


#!/bin/bash

set -e

# OVERVIEW
# This script creates and configures the env_tf210_p36 environment.

sudo -u ec2-user -i <<EOF

echo ". /home/ec2-user/anaconda3/etc/profile.d/conda.sh" >> ~/.bashrc

# Create custom conda environment
conda create --name env_tf210_p36 python=3.6 tensorflow-gpu=2.1.0 ipykernel tensorflow-datasets matplotlib pillow keras -y

# Activate our freshly created environment
source /home/ec2-user/anaconda3/bin/activate env_tf210_p36

# Install git-repository dependencies
pip install -q git+https://github.com/tensorflow/examples.git

# Expose environment as kernel
python -m ipykernel install --user --name env_tf210_p36 --display-name My_Env_tf_2.1.0_py_3.6

# Deactivate environment
source /home/ec2-user/anaconda3/bin/deactivate

EOF

That way you won't have to set up each new notebook instance you create; you'll just have to pick the lifecycle configuration you created. Take a look at Amazon SageMaker notebook instance Lifecycle Configuration samples.

MARCH 20, 2020

Maker.js: Parametric CNC Drawings Using JavaScript

Twenty-five days ago, Microsoft open sourced Maker.js, a JavaScript library to create drawings in the browser for CNC and laser cutting.

I love the playground site they made to share parametric scripts (say, a smiley face, a "hello, world" text, a floor plan, and more). See all demos.

From their website:

  • Drawings are a simple JavaScript object which can be serialized / deserialized conventionally with JSON. This also makes a drawing easy to clone.
  • Other people's Models can be required the Node.js way, modified, and re-exported.
  • Models can be scaled, distorted, measured, and converted to different unit systems.
  • Paths can be distorted.
  • Models can be rotated or mirrored.
  • Find intersection points or intersection angles of paths.
  • Traverse a model tree to reason over its children.
  • Detect chains formed by paths connecting end to end.
  • Get the points along a path or along a chain of paths.
  • Easily add a curvature at the joint between any 2 paths, using a traditional or a dogbone fillet.
  • Combine models with boolean operations to get unions, intersections, or punches.
  • Expand paths to simulate a stroke thickness, with the option to bevel joints.
  • Outline model to create a surrounding outline, with the option to bevel joints.
  • Layout clones into rows, columns, grids, bricks, or honeycombs.

Via @alfarok's GitHub stars.

MARCH 18, 2020

Here are some loose thoughts on what I've been tinkering with for the past months or years.

As of late, I've been working on Folio (the content-management system this site runs on) to add new features, fix bugs here and there, and make it easier to deploy across multiple websites, including mine and client sites. The system keeps getting better and better, and I repeatedly ask myself whether I'm reinventing the wheel in many areas. I make use of great third-party packages developed with care and thoughtfulness by other developers. It's incredible when they work but awful when other developers stop updating them and your code breaks. I now pay more and more attention to those GitHub stars (★) and pick carefully which packages to implement. Software rots.

I've learned a lot about managing my own Linux machines, either from scratch or from an existing image, to have a new site ready within minutes (at least, when I don't hit an unknown problem that steals a few hours from my day). I'm mainly deploying apps with Nginx and Laravel, but I've also learned to run and deploy Node.js apps (with PM2), serve Docker images, and run Python and Golang programs.

I'm trying to thoroughly document all the troubleshooting I go through so I don't have to dig through the internet to fix a bug I've fixed before. While it's obvious how to fix a bug you encountered yesterday, some bugs don't show up again for a long time, and you can save hours of work by keeping good notes.

A recent practice I've started playing with is creating automation files. I'm slowly getting acquainted with Makefiles: text files that describe commands, each a list of shell calls that run after you type make command-name in your Terminal. These commands run not only on Linux machines but also in the macOS Terminal, so I can run most of my automation scripts both on the desktop and on Linux servers. Here's a sample Makefile to set up a Digital Ocean droplet.

I build Folio mainly for myself. There are many systems like it but this one is entirely built by me, and it helps me learn new programming features as well as modern patterns and techniques by browsing other people's code when I use their packages. Many will hate Folio—simply because it runs on PHP—but I believe Laravel is making this programming language great again. (Trust me, I did PhpNuke sites back in 2003 and this is light years ahead.) Laravel feels like an updated version of Ruby on Rails.

I'm migrating most of my sites to Digital Ocean. Their droplet system (without hidden costs) is great. I've yet to decide where to put Getting Simple's podcast audio files. A powerful package by the Spatie team makes backing up websites a breeze: I can schedule automatic database and file backups at desired time intervals, even uploading them to multiple disks (such as Digital Ocean Spaces, Dropbox, or Amazon S3).

I've recently started using Imgix to distribute my images and remove that load from website servers. The image-processing REST API they offer is flexible and removes many headaches and the time lost manually editing images with Photoshop or other applications, whether to apply simple effects, sharpening, resizing, or even adding watermarks or padding. And their CDN makes delivery faster.

I rely less and less on TypeKit for web fonts, as I either serve the font files or use Google Fonts. There are also beautiful typefaces from type foundries that I might use soon. (Milieu Grotesque comes to mind.)

A big highlight (that sadly only runs on macOS) is Laravel Valet. After spending months learning how to configure Nginx blocks and crashing the system multiple times I found this simple tool that handles everything for you. There's a bit of reading to do to fully understand what it does but I'd summarize its benefits to two commands: valet link and valet unlink. With one command your computer serves a PHP app at http://app-name.test from the directory in which you run the command and the second command stops serving it. You can quickly register a folder to serve an app (say, with valet link nono) and quickly go to its URL to test the site locally (at http://nono.test). Behind the scenes, Valet uses dnsmasq, php, and nginx. Not having to deal with them on a daily basis makes things easy (even though I like to learn what's behind these systems and how to do it the manual way in case there's a need for it).

Another thing I'm loving is Cron jobs. They can be set either on Linux (as I do on Digital Ocean) or on macOS with crontab -e. You also have to learn a bit about how Cron works, but the short story is that it lets you schedule tasks: command executions at whatever time interval (or time of the day) you want. For instance, * * * * * curl https://google.com would ping Google every minute. And you can go from every minute to certain times of the day or fractions of the hour. Laravel builds on top of it by letting you schedule commands with a clean, high-level API (for instance, ->everyMinute() or ->dailyAt('17')). All you do is set a Cron job to execute the Laravel scheduler every minute, and Laravel decides which commands to run when.

Last but not least, I'd highlight the importance of logging. Most environments have ways to log executions and errors and this is the easiest way to find out what's breaking your system. I've added a route in Folio's administration in order to visualize logs from anywhere and, at a lower level, Nginx lets you log access and errors as well.

I'm constantly learning, and Folio and my website are my playground.

As I started saying, these are loose thoughts of many of the tech I've been exploring over the past year. I'm also learning about Docker, TensorFlow, Runway, and much more, and frequently keeping my notes on Dropbox Paper. Some of my experiments are kept in the open on GitHub, and I've recently started sharing what I'm learning on this YouTube playlist.

What are you tinkering with?

MARCH 18, 2020

try:
    %tensorflow_version 2.x
except Exception:
    pass

import tensorflow as tf

Note that %tensorflow_version is only available in Colab and not in regular Python.
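
Outside Colab, you can check which TensorFlow version a regular Python environment is running:

import tensorflow as tf

# Prints the installed TensorFlow version, e.g. 2.1.0
print(tf.__version__)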

FEBRUARY 26, 2020

I was getting this error in Laravel as I created a new app using the latest version (that is, 6.2). I'm not sure why, but the class would work locally and not remotely (on a deployment running Ubuntu 18.04.3 on DigitalOcean, to be precise).

I was using \ResourceBundle::getLocales('') to get a list of valid locales present on a website and then using in_array($needle, $array) to check whether a given locale is valid in PHP.

Here's how I fixed it.

  • composer require symfony/intl to install Symfony's Intl component.
  • Replaced my in_array calls with \Symfony\Component\Intl\Locales::exists($translation).

FEBRUARY 21, 2020

In C#, .NET, and Visual Studio, you can use the Random Class to generate random numbers.

First, you create a random number generator.

var random = new Random(1); // where 1 is our seed

Then you request the next random number (or next random double).

// Next random number
var aRandomNumber = random.Next();

// Next random double
var aRandomDouble = random.NextDouble();

FEBRUARY 13, 2020

Set editor to nano (or your editor of choice).

export EDITOR=nano
crontab -e

Edit the file to, for instance, create a folder with a timestamp every minute in your Desktop folder.

* * * * * cd ~/Desktop && mkdir `date +\%y\%m\%d_\%H\%M\%S`

FEBRUARY 10, 2020

Let's say you stage all your Git changes and then commit them.

git add --all
git commit -m "Edit REDME.md"

There's a typo on REDME — should read README — and we want to "amend" this error.

git commit --amend

The commit --amend command lets you edit the commit message in your default editor (vim, for instance).

You can also change the message by specifying the new message in the command line with the -m argument.

git commit --amend -m "Edit README.md"

As the commit message is part of the commit itself, editing the message alters the commit hash, which means that if you've already pushed a commit to a remote, the remote won't let you push the new edit directly. But you can force that to happen.

git push --force origin branch-name

FEBRUARY 6, 2020

npx and create-react-app make it easy to create a new app running React and TypeScript.

npx create-react-app my-app --template typescript

Then you go into the folder and run the app.

cd my-app
npm start

You can create a JavaScript (non-TypeScript) app by removing the last bit—--template typescript. And you can also run the app with yarn start.

If, as happened to me, you can't get the app working, you might have an older global installation of create-react-app. In my case, I had installed it with npm install -g create-react-app (which I could verify by running create-react-app -V in the Terminal). To make sure npx uses the latest version of create-react-app, you need to uninstall the global version installed with npm.

npm uninstall -g create-react-app

Before you go

If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, React, and TypeScript.

FEBRUARY 5, 2020

git push origin $(git branch | grep \* | cut -d ' ' -f2)

JANUARY 23, 2020

$output = preg_replace('!\s+!', ' ', $input);

From StackOverflow.

OCTOBER 11, 2019

If you want to add the .cs files from one Visual Studio solution (or project) to another, without duplicating or moving them, you can use the Add > Existing Item... option. Instead of selecting the files and clicking Add (which would copy them over to your project folder, duplicating them), click the small arrow next to the Add button and choose Add As Link. The files will be linked to your project as a reference to the other project, and editing them will change their code in both Visual Studio projects.

Want to see older publications? Visit the archive.

Listen to Getting Simple.